
    Intelligent Agent

    Chapter 2


    Outline

    Agent and Environment

    Rationality

    Performance Measure

    Environment Type

    Agent Type


    Agent interacting with Environment

    Agents include humans, robots, softbots, thermostats, etc.

    The agent function f maps percept histories to actions

    The agent program runs on the physical architecture to produce f
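
    As a rough illustration of this split, a table-driven sketch in Python (the class name and the vacuum-world percepts are illustrative, not part of the slides):

```python
# Minimal sketch (assumed names): the agent function f maps an entire
# percept history to an action; the agent program is the concrete code that
# implements f on the architecture, here by table lookup over the percept
# sequence seen so far.

class TableDrivenAgentProgram:
    def __init__(self, table):
        self.table = table      # dict: tuple of percepts -> action
        self.percepts = []      # percept history accumulated at run time

    def __call__(self, percept):
        self.percepts.append(percept)
        # f(percept history) -> action; None if the history is not in the table
        return self.table.get(tuple(self.percepts))

# Example: a two-square vacuum world, percept = (location, status)
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
}
agent_program = TableDrivenAgentProgram(table)
print(agent_program(("A", "Dirty")))   # -> Suck
```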


    Fig 2.3


    Rational Agent

    Characteristics of a Rational Agent

    Tries to maximize the expected value of the performance measure

    performance measure = degree of success

    on the basis of evidence provided by the percept sequence

    using its built-in prior world knowledge

    Rational Agent: an agent that does the RIGHT thing

    Rational ≠ omniscient, clairvoyant, or successful

    Right decision vs. lucky decision

    Example: playing the lotto
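
    As a rough formalization of the above (a sketch; A denotes the set of actions available to the agent):

$$a^{*} = \arg\max_{a \in A} \; \mathbb{E}\left[\, \text{performance measure} \;\middle|\; \text{percept sequence},\ \text{prior knowledge},\ a \,\right]$$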


    Rationality

    Rationality requires information gathering, learning, and autonomy

    Information gathering

    Taking actions to modify future percepts

    Exploration of an unknown environment

    Learning

    Modify prior knowledge

    Autonomy

    Learn to compensate for partial or incorrect prior knowledge

    Become independent of prior knowledge

    Successful in a variety of environments

    Importance of learning


    PEAS: Performance Measure, Environment, Actuators, Sensors

    To design a rational agent, we must specify the task environment, which consists of PEAS (Performance Measure, Environment, Actuators, Sensors)

    Taxi Driver

    Performance measure: safe, fast, legal, comfortable trip, maximize profits

    Environment: roads, other traffic, pedestrians, customers

    Actuators: steering, accelerator, brake, signal, horn, display

    Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard or microphone to accept destination
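
    As a rough illustration, a PEAS description can be written down as plain data; the PEASSpec class below is a hypothetical sketch, not a standard API:

```python
# Hypothetical sketch: a PEAS task-environment description as a record.
from dataclasses import dataclass

@dataclass
class PEASSpec:
    agent_type: str
    performance_measure: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

taxi_driver = PEASSpec(
    agent_type="Taxi driver",
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "accelerometer", "engine sensors", "keyboard/microphone for destination"],
)
```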


    Internet Shopping Agent

    Performance Measure :

    Environment :

    Actuators :

    Sensors :


    Properties of Task Environments

    Fully observable vs. Partially observable

    Deterministic vs. Stochastic

    Strategic: deterministic except for the actions of other agents

    Episodic vs. Sequential

    Static vs. Dynamic

    Discrete vs. Continuous

    Single-agent vs. Multiagent

    Competitive, cooperative
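
    One possible way to encode these six dimensions as a record, shown for taxi driving under its usual characterization (partially observable, stochastic, sequential, dynamic, continuous, multi-agent); the type and field names are illustrative:

```python
# Sketch (illustrative names): the six environment dimensions as booleans.
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

taxi_driving = TaskEnvironment(
    fully_observable=False,   # partially observable
    deterministic=False,      # stochastic
    episodic=False,           # sequential
    static=False,             # dynamic
    discrete=False,           # continuous
    single_agent=False,       # multi-agent
)
```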


    Task Environment Types

                      8-puzzle   Backgammon   Internet Shopping   Medical diagnosis   Taxi driving
    Observable ?
    Deterministic ?
    Episodic ?
    Static ?
    Discrete ?
    Single-agent ?

    The real world is partially observable, stochastic, sequential, dynamic, continuous, and multi-agent


    2-3. Structure of Intelligent Agents

    perception → action?

    Agent = architecture + program


    Types of agents

    Four basic types

    Simple Reflex Agent

    Reflex Agent with state that keeps track of the world

    Also called model-based reflex agent

    Goal-based agent

    Utility-based agent

    All these can be turned into Learning Agents

    (in order of increasing generality)


    (1) Simple Reflex Agent

    Characteristics

    no plan, no goal

    do not know what they want to achieve

    do not know what they are doing

    condition-action rule

    If condition then action (see the code sketch below)

    architecture - [fig. 2.9]

    program - [fig. 2.10]
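
    A minimal sketch of such a condition-action agent, using the two-square vacuum world as the example (the function name and percept format are assumptions, not the book's code):

```python
# Simple reflex agent sketch: the action depends only on the CURRENT percept,
# chosen by condition-action rules; no internal state, plan, or goal.

def simple_reflex_vacuum_agent(percept):
    location, status = percept        # percept = (location, status)
    if status == "Dirty":             # if condition then action
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left
```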


    Fig 2.9 Simple reflex agent


    Fig 2.10 Simple reflex agent program


    (2) Model-based Reflex Agents

    Characteristics

    Reflex agent with internal state

    Sensors do not provide the complete state of the world

    Updating the internal state requires two kinds of knowledge, together called the model:

    How the world evolves

    How the agent's actions affect the world

    architecture - [fig 2.11]

    program - [fig 2.12]
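
    A hedged sketch of the idea: keep an internal state, fold each percept into it using the model, and only then match condition-action rules. The update_state function and the rule format are hypothetical placeholders:

```python
# Model-based reflex agent sketch: internal state is maintained with a model
# of how the world evolves and how the agent's actions affect it.

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.update_state = update_state   # (state, last_action, percept) -> new state
        self.rules = rules                 # list of (condition(state) -> bool, action)
        self.state = {}
        self.last_action = None

    def __call__(self, percept):
        # fold the new percept into the internal picture of the world
        self.state = self.update_state(self.state, self.last_action, percept)
        # fire the first condition-action rule whose condition matches the state
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
```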


    Fig 2.11 A model-based reflex agent


    (3) Goal-based agents

    Characteristics

    Action depends on the GOAL (consideration of the future)

    A goal is a desirable situation

    Choose an action sequence to achieve the goal

    Needs decision making

    fundamentally different from the condition-action rule

    Search and Planning

    Appears less efficient, but more flexible

    Because knowledge is represented explicitly and can be modified

    Architecture - [fig 2.13]
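
    One way the goal-based idea can be realized is by searching a model of the world for an action sequence that reaches a goal state; the breadth-first sketch below is illustrative, with successors and is_goal as hypothetical placeholders:

```python
# Goal-based sketch: instead of condition-action rules, search for a sequence
# of actions that leads from the current state to a goal state.
from collections import deque

def plan_to_goal(start, is_goal, successors):
    """Breadth-first search; returns a list of actions reaching a goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None   # no action sequence achieves the goal

# Toy usage: reach 5 from 0 with +1 / -1 moves
succ = lambda s: [("inc", s + 1), ("dec", s - 1)]
print(plan_to_goal(0, lambda s: s == 5, succ))   # -> ['inc', 'inc', 'inc', 'inc', 'inc']
```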


    Fig 2.13 A model-based, Goal-based Agent


    (4) Utility-based agents

    Utility function

    Degree of happiness

    Quality of usefulness

    Maps the internal states to a real number (e.g., game playing)

    Characteristics

    to generate high-quality behavior

    Rational decisions are made by looking for higher utility values

    Expected Utility Maximizer

    Explore several goals

    Structure - [fig 2.14]
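
    A minimal sketch of an expected-utility maximizer, assuming each action yields a lottery over outcome states and a utility function maps states to real numbers (all names below are illustrative):

```python
# Utility-based sketch: pick the action with the highest expected utility.

def expected_utility(action, outcome_model, utility):
    # outcome_model(action) -> list of (probability, resulting_state)
    return sum(p * utility(s) for p, s in outcome_model(action))

def choose_action(actions, outcome_model, utility):
    return max(actions, key=lambda a: expected_utility(a, outcome_model, utility))

# Toy usage: two actions with different outcome lotteries
model = {"a": [(0.5, "win"), (0.5, "lose")], "b": [(1.0, "draw")]}
util = {"win": 10, "lose": -10, "draw": 1}.get
print(choose_action(["a", "b"], model.get, util))   # -> 'b' (expected utility 1 vs. 0)
```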


    Fig 2.14 A Model-based, Utility-based Agent


    Learning Agents

    Improve performance based on percepts

    Four components:

    Learning element: makes improvements

    Performance element: selects external actions

    Critic: tells how well the agent is doing, based on a fixed performance standard

    Problem generator: suggests exploratory actions
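
    A sketch of how the four components might be wired together in one decision step; each component here is a hypothetical callable named after its role on the slide, not a concrete library API:

```python
# Learning agent sketch: critic -> learning element -> performance element,
# with a problem generator occasionally proposing exploratory actions.

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # percept -> action
        self.learning_element = learning_element        # uses feedback to improve the performance element
        self.critic = critic                            # percept -> feedback vs. a fixed standard
        self.problem_generator = problem_generator      # percept -> exploratory action or None

    def step(self, percept):
        feedback = self.critic(percept)                            # how well is the agent doing?
        self.learning_element(self.performance_element, feedback)  # make improvements
        action = self.performance_element(percept)                 # select an external action
        exploratory = self.problem_generator(percept)              # maybe try something new
        return exploratory if exploratory is not None else action
```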


    General Model of Learning Agents