7/28/2019 Lecture 1 - Intelligent Agent
2013-01-23
Ning Xiong
Mälardalen University
Lecture 1: Intelligent Agents
Outline
What is an agent?
Rationality of agent behavior
Task environments for agents
Types of intelligent agents
What is an Agent?
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators to change the states of the environment.
Agent Examples
Human agent
Sensors: eyes, ears, and other organs for sensing
Actuators: hands, legs, mouth, and other body parts
Robotic agent
Sensors: cameras, sonar, and infrared range finders
Actuators: robotic arms, various motors
Agent Function and Program
The agent function maps from percept histories to actions:

f: P* → A

Agent behavior is decided by the agent function.

The agent program runs on the physical architecture (computing device) to implement the agent function f.
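As a minimal sketch (illustrative, not from the lecture), the mapping f: P* → A from percept histories to actions can be realized as a lookup table over the history seen so far; the table entries and percept names below are made-up examples.

```python
class TableDrivenAgent:
    """An agent whose function f maps the full percept history P* to an action."""

    def __init__(self, table, default_action):
        self.table = table                # maps percept-history tuples to actions
        self.default_action = default_action
        self.percepts = []                # the history P* seen so far

    def act(self, percept):
        self.percepts.append(percept)
        # f: P* -> A, realized here as a table lookup on the whole history
        return self.table.get(tuple(self.percepts), self.default_action)

agent = TableDrivenAgent({("dirty",): "suck", ("dirty", "clean"): "move"}, "noop")
print(agent.act("dirty"))   # -> suck
print(agent.act("clean"))   # -> move
```

The table grows with the length of the history, which is why practical agent programs compute f rather than store it.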
Rational agents
A rational agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
Why Not "Optimal" Agents?
The inherent uncertainty of the real world means an optimal solution may not exist or can hardly be judged:
1. Imperfect or incomplete knowledge about the world makes it impossible to judge the optimality of actions.
2. Chance factors in the real world make outcomes stochastic and uncertain.
3. The consequence of an action by an agent is affected by the actions of other agents.
Rational Choice
[Decision tree: a decision node branches into acts A1 and A2; A1 leads to outcomes O11 and O12 with probabilities p11 and p12, while A2 leads to O21 and O22 with probabilities p21 and p22.]

EU(A1) = u11*p11 + u12*p12
EU(A2) = u21*p21 + u22*p22
A rational agent will prefer act A1 to act A2 provided that
EU(A1)>EU(A2)
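The expected-utility comparison can be computed directly; the probabilities and utilities below are made-up numbers for illustration only.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one act."""
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers: (p11, u11), (p12, u12) for A1, likewise for A2
A1 = [(0.7, 10), (0.3, 2)]   # EU(A1) = 0.7*10 + 0.3*2 = 7.6
A2 = [(0.5, 8), (0.5, 1)]    # EU(A2) = 0.5*8 + 0.5*1 = 4.5

# A rational agent prefers A1 iff EU(A1) > EU(A2)
print("prefer A1" if expected_utility(A1) > expected_utility(A2) else "prefer A2")
```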
Agent Environment Types
Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent.
Environment types
Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.
Single agent (vs. multiagent): An agent
operating by itself in an environment.
Environment types
                  Chess with a clock   Chess without a clock   Taxi driving
Fully observable  Yes                  Yes                     No
Deterministic     Yes                  Yes                     No
Static            Semi                 Yes                     No
Discrete          Yes                  Yes                     No
Single agent      No                   No                      No
Intrinsically, the real world is partially observable, stochastic, dynamic, continuous, and multi-agent.
Agent types
Four types of agents in order of increasing
generality:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Learning agents
The course covers key techniques for designing
various types of intelligent agents
Simple reflex agents
Use if-then rules to define mapping from percepts to actions
Behavior-based intelligence without reasoning (in robotics)
Lecture 3: Fuzzy rule-based control for decision making
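A simple reflex agent can be sketched as a list of condition-action rules matched against the current percept only, with no internal state; the vacuum-world percepts and actions below are a hypothetical example, not from the lecture.

```python
# Condition-action rules: each pairs a test on the current percept with an action.
RULES = [
    (lambda p: p["status"] == "dirty", "suck"),
    (lambda p: p["location"] == "A", "move_right"),
    (lambda p: p["location"] == "B", "move_left"),
]

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via if-then rules."""
    for condition, action in RULES:
        if condition(percept):
            return action
    return "noop"

print(simple_reflex_agent({"location": "A", "status": "dirty"}))  # -> suck
print(simple_reflex_agent({"location": "A", "status": "clean"}))  # -> move_right
```

Because the rules see only the current percept, such an agent works well only when the right action is fully determined by what it senses now.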
Model-based reflex agents
World model
When the world state is not fully observable, use the model of
the environment to estimate the current state given the sensor
observations
Model-based state estimation
Consider a process-monitoring problem where we want to estimate the state of a process given sensor measurements. The available model for the process is as follows:

x(k) = F*x(k-1) + w(k-1)
y(k) = H*x(k) + v(k)

where x denotes the process state, y the sensor measurements, and w and v the process and measurement noise. Bayesian filtering methods such as Kalman filtering and particle filtering are used to update and refine state estimates.
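A minimal scalar sketch of one Kalman-filter predict/update cycle for a linear model of this form; the noise variances Q and R and the measurement values below are illustrative assumptions, not from the lecture.

```python
def kalman_step(x_est, P, y, F=1.0, H=1.0, Q=0.01, R=0.25):
    """One predict/update cycle of a scalar Kalman filter.

    x_est, P: previous state estimate and its variance
    y: new sensor measurement; Q, R: variances of w and v
    """
    # Predict: propagate the estimate through the process model
    x_pred = F * x_est
    P_pred = F * P * F + Q
    # Update: blend in the new measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (y - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                          # initial guess with high uncertainty
for y in [0.9, 1.1, 1.0, 0.95]:          # noisy sensor readings (made up)
    x, P = kalman_step(x, P, y)
# x is drawn toward the measurements while P (the uncertainty) shrinks
```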
Multi-sensor data fusion

One lecture in the course will discuss how agents can utilize multi-sensor data and other information to better estimate the hidden states of the environment and to acquire better situation awareness.
Goal-based agents
Analyze and predict the resulting outcomes of possible actions, and choose the most promising action to satisfy the goal.
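Goal-based selection can be sketched as one-step lookahead: predict each candidate action's resulting state with a model and pick an action whose predicted state satisfies the goal. The toy one-dimensional world below is made up for illustration.

```python
def goal_based_agent(state, actions, model, goal_test):
    """Return an action whose predicted outcome satisfies the goal, if any."""
    for action in actions:
        if goal_test(model(state, action)):   # predict, then check the goal
            return action
    return None  # no single action reaches the goal from this state

# Toy world: state is a position on a line, the goal is position 2
model = lambda s, a: s + {"left": -1, "right": +1}[a]
goal_test = lambda s: s == 2
print(goal_based_agent(1, ["left", "right"], model, goal_test))  # -> right
```

Deeper lookahead (search and planning) generalizes this to sequences of actions rather than a single step.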
Lecture 5: decision theory and analysis to help select rational actions
Lecture 6: How to make decisions by
exploiting previous experiences
Lecture 8: How to find a set of interesting,
nondominated solutions in the continuous
space with multiple conflicting objectives
Learning agents
Learning element: modifies the agent function used in decision making
Decision making: applies the agent function to select external actions
Critic: evaluates how well the agent is doing
[Diagram: percepts flow from the environment through sensors to the decision-making component, which drives the actuators; the critic evaluates performance and sends feedback to the learning element, which modifies decision making.]
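The critic-to-learning-element loop can be sketched with a toy numeric example (all components below are made up for illustration): the critic returns the signed error of the agent's action, and the learning element nudges the decision parameter theta toward reducing it.

```python
def decide(theta, percept):
    return theta                      # decision making: the agent's current guess

def env_step(action, target=5.0):
    percept = target                  # the environment's response
    error = 0.0 if action is None else target - action
    return percept, error

def critic(error):
    return error                      # performance feedback: signed error

def learn(theta, feedback, rate=0.5):
    return theta + rate * feedback    # learning element: modify theta

theta = 0.0
percept, _ = env_step(None)           # initial percept, no action yet
for _ in range(20):
    action = decide(theta, percept)
    percept, outcome = env_step(action)
    theta = learn(theta, critic(outcome))
# theta has been driven toward the target value 5.0
```

The same loop structure underlies the fuzzy adaptive control of Lecture 4, where performance feedback modifies fuzzy decision rules instead of a single parameter.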
Learning Mechanisms in DVA406
Fuzzy adaptive control: on-line modification of fuzzy
decision rules based on performance feedback.
Lecture 4
Other agent learning approaches like reinforcement learning are
given in the Learning Systems course (CDT407).
Recommended Reference on Agents
Chapter 2: Intelligent agents, in: Artificial
Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig, Prentice Hall, 2002.