Introduction To AI Systems
By: Nehal Varma
What is Artificial Intelligence?
AI is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans.
Some of the activities that computers with artificial intelligence are designed for include: speech recognition, learning, planning, and problem solving.
History
Alan Turing
The Turing Test, proposed by Alan Turing in 1950, was the first serious proposal in the philosophy of artificial intelligence.
Agent Definition: An agent perceives its environment via sensors and acts in that environment with its effectors.
agent = architecture + program
Intelligent Agent
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Example Of Agents:
A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so on for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
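The perceive-act cycle behind all three examples can be sketched in a few lines of Python. The thermostat scenario and all names here (agent_program, run) are illustrative assumptions, not from the slides.

```python
# Minimal sketch of the agent cycle: sensors produce percepts, the agent
# program maps each percept to an action, and effectors would carry it out.

def agent_program(percept):
    """Map a percept (room temperature in Celsius) to an action."""
    return "heater_on" if percept < 20 else "heater_off"

def run(temperatures):
    """Drive the sensor -> program -> effector loop over a percept sequence."""
    actions = []
    for temp in temperatures:         # each reading comes from a sensor
        action = agent_program(temp)  # the agent program decides
        actions.append(action)        # an effector would execute this
    return actions
```

Note the split this makes concrete: the function `agent_program` is the "program" half of agent = architecture + program, while the loop stands in for the "architecture" that feeds it percepts.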
Rational Agents
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
For example: a car wiper, a vacuum cleaner.
Environment Types
Fully observable: An agent’s sensors give it access to the complete state of the environment at each point in time.
Deterministic: The next state of the environment is completely determined by the current state and the action executed by the agent.
Episodic: The agent’s experience is divided into atomic episodes, and the choice of action in each episode depends only on that episode, because subsequent episodes do not depend on the actions taken in previous ones. The agent does not need to think ahead.
Types Of Agents
Simple Reflex Agents
Model Based Reflex Agents
Goal Based Agents
Utility Based Agents
Simple Reflex Agents
This is the simplest type. These agents select actions on the basis of the current percept only.
They have no memory.
Example: if car-in-front-is-braking then initiate-braking
function REFLEX-VACUUM-AGENT( [location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
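The pseudocode above translates directly into Python. This is a sketch for the two-location vacuum world (squares A and B) that the pseudocode assumes.

```python
# Direct rendering of REFLEX-VACUUM-AGENT: the action depends only on the
# current percept (location, status); no memory of past percepts is kept.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"
```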
Model Based Reflex Agents
It keeps track of the current state of the world using an internal model of the environment. It then chooses an action in the same way as the simple reflex agent.
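A model-based version of the vacuum agent can be sketched as follows: it remembers which squares it has seen clean, so it can stop once the whole world is clean, something the memoryless reflex agent cannot do. The class and action names are illustrative assumptions.

```python
# Model-based reflex agent for the two-square vacuum world: an internal
# model tracks the last known status of each square.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state of the world (None = status unknown so far).
        self.model = {"A": None, "B": None}

    def program(self, percept):
        location, status = percept
        if status == "Dirty":
            self.model[location] = "Clean"  # it will be clean after Suck
            return "Suck"
        self.model[location] = "Clean"
        if self.model["A"] == self.model["B"] == "Clean":
            return "NoOp"                   # model says: nothing left to do
        return "Right" if location == "A" else "Left"
```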
Goal Based Agents
Knowing something about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on.
The agent needs some sort of goal information, for example, the passenger's destination. Sometimes goal-based action selection is straightforward. Sometimes it will be more tricky, for example, when the agent has to consider long sequences of twists and turns in order to find a way to achieve the goal.
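Goal-based selection for the taxi example can be sketched with a small search: the same junction percept leads to different actions depending on the goal. The road map, place names, and function names here are illustrative assumptions.

```python
# Goal-based action selection: breadth-first search over a tiny road graph
# returns the first action on a shortest path to the goal.

from collections import deque

def first_action_toward(graph, start, goal):
    frontier = deque([(start, [])])  # (location, actions taken so far)
    visited = {start}
    while frontier:
        node, actions = frontier.popleft()
        if node == goal:
            return actions[0] if actions else "Stop"
        for action, neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, actions + [action]))
    return None  # goal unreachable

# At the junction the taxi can turn left, turn right, or go straight on.
roads = {
    "junction": [("left", "airport"), ("right", "station"), ("straight", "mall")],
    "airport": [], "station": [], "mall": [],
}
```

With a passenger bound for the station, the agent turns right; with one bound for the airport, it turns left: same percept, different goals, different actions.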
Utility Based Agents
Goals alone are not enough to generate high-quality behavior in most environments. For example, many action sequences will get the taxi to its destination (thereby achieving the goal) but some are quicker, safer, more reliable, or cheaper than others.
A utility function measures how well the goal can be achieved (the agent's degree of "happiness"), letting the agent choose among goal-achieving alternatives.
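The taxi trade-off above can be sketched with a utility function that ranks routes which all reach the destination. The routes, weights, and numbers below are illustrative assumptions, not a definitive utility model.

```python
# Utility-based selection: every route achieves the goal, but the utility
# function trades off travel time, cost, and safety to pick the best one.

def utility(route):
    """Higher is better: penalize minutes and cost, reward safety."""
    return -route["minutes"] - 0.5 * route["cost"] + 10 * route["safety"]

routes = [
    {"name": "highway",  "minutes": 20, "cost": 8, "safety": 0.9},
    {"name": "downtown", "minutes": 35, "cost": 2, "safety": 0.7},
    {"name": "backroad", "minutes": 25, "cost": 1, "safety": 0.8},
]

best = max(routes, key=utility)  # the route with the highest utility
```

A goal-based agent would treat all three routes as equally acceptable; the utility function is what distinguishes quicker, safer, or cheaper ones.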