BCSE306L - ARTIFICIAL INTELLIGENCE
Dr. Kavipriya G
Assistant Professor (Senior Grade 1)
SCOPE, VIT Chennai
Cabin details: AB3 First Floor Annexure 109 A, cabin no. 24
COURSE OBJECTIVES

1. To impart artificial intelligence principles, techniques, and their history.
2. To assess the applicability, strengths, and weaknesses of the basic knowledge representation, problem solving, and learning methods in solving engineering problems.
3. To develop intelligent systems by assembling solutions to concrete computational problems.
COURSE OUTCOMES
On completion of this course, students should be able to:
1. Evaluate Artificial Intelligence (AI) methods and describe their foundations.
2. Apply basic principles of AI in solutions that require problem-solving, inference, perception, knowledge representation and learning.
3. Demonstrate knowledge of reasoning, uncertainty, and knowledge representation for solving real-world problems.
4. Analyze and illustrate how search algorithms play a vital role in problem-solving.
Textbook
• Russell, S. and Norvig, P. 2015. Artificial Intelligence: A Modern Approach, 3rd Edition, Prentice Hall.
• K. R. Chowdhary, Fundamentals of Artificial Intelligence, Springer, 2020.

Reference Books
• Alpaydin, E. 2010. Introduction to Machine Learning, 2nd Edition, MIT Press.
BCSE306L - Theory evaluation procedure

Assessment   Marks
CAT-1         15
CAT-2         15
DA-1          10
DA-2          10
DA-3          10
FAT           40
Total        100
MODULE 1

TOPICS
• Introduction - Evolution of AI, State of the Art - Different Types of Artificial Intelligence - Applications of AI - Subfields of AI - Intelligent Agents - Structure of Intelligent Agents - Environments
INTRODUCTION

Artificial + Intelligence = Artificial Intelligence

• A branch of computer science
• Imitates the role of the human brain
INTRODUCTION
• What is Artificial?
• Made or produced by human beings rather than occurring naturally, especially
as a copy of something natural.

• What is Intelligence?
• The ability to acquire and apply knowledge and skills.
ARTIFICIAL INTELLIGENCE
• Artificial Intelligence is the ability of a computer to act like a human being.
Introduction to AI / What is Intelligence?
● Intelligence, taken as a whole, is the capacity to learn and solve problems. It consists of the following skills:
1. the ability to acquire and perceive knowledge
2. the ability to understand and apply knowledge
3. the ability to act and communicate ideas
Intelligence – Amazon Alexa
Ability to Acquire and Perceive:
• acquires information through voice commands from users.
• When a user speaks to Alexa, it activates its microphones to capture
the voice command, which is then converted into digital data.
Ability to Understand and Apply:
• After acquiring the voice command, Alexa uses natural language processing (NLP) algorithms to understand the intent and meaning behind the user's request.
Ability to Act and Communicate:
• Once Alexa has understood the user's request, it acts by executing the requested action.
• Alexa communicates back to the user by confirming the action verbally, or through visual feedback on devices with screens.
WHERE WE ARE?
EVOLUTION OF AI
THE TURING TEST
1950 – Alan Turing devised a test for intelligence called the Imitation Game
• Ask questions of two entities, and receive answers from both
• If you can't tell which of the entities is human and which is a computer program, then you are fooled and we should therefore consider the computer to be intelligent

Which is the person? Which is the computer?
A Brief History of AI: 1950s
• Computers were thought of as electronic brains
• Term “Artificial Intelligence” coined by John McCarthy
• John McCarthy also created Lisp in the late 1950s
• Alan Turing defines intelligence as passing the Imitation
Game (Turing Test)
• AI research largely revolves around toy domains
• Computers of the era didn’t have enough power or memory to solve useful
problems
• Problems being researched include
• games (e.g., checkers)
• primitive machine translation
• blocks world (planning and natural language understanding within the toy domain)
• early neural networks researched: the perceptron
• automated theorem proving and mathematics problem solving
• McCulloch and Pitts (1943)
• Neural networks that learn
History of AI
• Minsky (1951)
• Built a neural net computer

• Dartmouth conference (1956):
  • McCarthy, Minsky, Newell, Simon met
  • Logic Theorist (LT) - proved theorems from Russell and Whitehead's Principia Mathematica
  • The name "Artificial Intelligence" was coined

• 1952-1969
  • GPS - Newell and Simon
  • Geometry theorem prover - Gelernter (1959)
  • Samuel's checkers program that learns (1952)
  • McCarthy - Lisp (1958), Advice Taker; Robinson's resolution
  • Microworlds: integration, blocks world
  • 1962 - the perceptron convergence theorem (Rosenblatt)
History of Artificial Intelligence
• 1950: Alan Turing publishes Computing Machinery and Intelligence.
• 1956: John McCarthy coins the term 'artificial intelligence' at the first-ever AI conference at
Dartmouth College.
• 1967: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that 'learned' through trial and error.
• 1980s: Neural networks which use a backpropagation algorithm to train themselves become widely used in AI applications.
• 1997: IBM's Deep Blue beats then world chess champion Garry Kasparov in a chess match (and rematch).
• 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!
• 2015: Baidu's Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
• 2016: DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match.
Shakey - the first AI robot (1960s)
4 Categories of Definitions for AI

• Systems that think like humans
• Systems that think rationally
• Systems that act like humans
• Systems that act rationally
1. Acting Humanly
• Also called Turing Test Approach
• The art of creating machines that perform functions that require intelligence when performed by people
• i.e., making computers act like humans
• Example: the Turing Test
Turing Test Approach
• Provides a satisfactory operational definition of intelligence.
• Intelligence is assessed by conducting the Turing Test.
• The Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human being. The test is named after Alan Turing, who proposed it in 1950.
The Turing Test Approach
• Three participants: a human, a machine, and a judge (interrogator).
• Pass the test?
  • The machine passes if the interrogator cannot tell whether there is a computer or a human at the other end.

Which one's the computer: A or B?
Qualities required to pass the Turing Test
• The system requires these abilities to pass the test:
  • Natural language processing
    • for communication with humans
  • Knowledge representation
    • to store information effectively and efficiently
  • Automated reasoning
    • to retrieve and answer questions using the stored information
  • Machine learning
    • to adapt to new circumstances
Total Turing Test
• Tests the subject's perceptual abilities
• To pass the Total Turing Test, the computer will also need:
  • Computer vision to perceive objects
  • Robotics to manipulate objects and move about
2. Thinking Humanly
• Making computers think like humans
• The goal is to build systems that function internally in some way similar to the human mind
• Also called the cognitive modeling approach
Cognitive Modeling Approach
• If we are going to say that a given program thinks like a human, we need some way of determining how humans think;
• we need to get inside the actual workings of human minds.
• Given a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program.
• Example: GPS (General Problem Solver)
  • compares the reasoning steps of the program with those of a human solving the same problems.
Cognitive Science
• Combines computer models from AI and experimental techniques from psychology.
• Tries to construct precise and testable theories of the working of the human mind.
• Cognitive science
  • is an interdisciplinary science
  • draws on many fields (such as psychology, artificial intelligence, linguistics, and philosophy) in developing theories about human perception, thinking, and learning
3. Thinking Rationally
• Also called the Laws of Thought approach
• Thinking the "right thing"
• Making computers think the "right thing"
• Relies on logic (to make inferences) rather than on humans to measure correctness.
• Logic: provides a precise notation for statements about all kinds of things in the world and the relations between them.
• Syllogism: provides patterns for argument structures
  • always gives correct conclusions given correct premises
• For example,
  • Premise: John is a human and all humans are mortal
  • Conclusion: John is mortal
• Can be done using logic, e.g., propositional and predicate logic; a formal rendering is shown below.
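In standard first-order notation (a conventional rendering, not taken from the slides), the example syllogism becomes:

\forall x\, (\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x)), \quad \mathrm{Human}(\mathrm{John}) \;\vdash\; \mathrm{Mortal}(\mathrm{John})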
Two obstacles: Laws of Thought of Approach
• It’s not easy to take informal knowledge and state it in the formal
terms required by logical notation, particularly when the knowledge
is less than 100% certain.
• Being able to solve a problem “in principle” and doing so “in
practice” are very different. Even problems with just a few
hundred facts can exhaust the computational resources of
any computer unless it has some guidance as to which
reasoning steps to try first.
• i.e.,
  1. Informal knowledge is not precise.
  2. It is difficult to model uncertainty.
  3. Theory and practice cannot always be put together.
4. Acting Rationally – the rational agent approach
• An agent is just something that acts (agent comes from the Latin agere, to do). Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
• Also called the Rational Agent approach.
• Doing the "right thing"
• Rational agent - acts to achieve the best outcome.
• An agent acts rationally if it selects the action that maximizes its performance measure.
Rational Agent Approach
• Design of rational agents
• Advantages
  • more general than the laws-of-thought approach
  • concentrates on scientific development

Limited Rationality
• acting appropriately when there is not enough time to do all the computations
STATE OF THE ART

Robotic vehicles
• A driverless robotic car named STANLEY sped through the rough terrain of the Mojave desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge.

DARPA Grand Challenge 2004: Barstow, CA, to Primm, NV
• 150-mile off-road robot race across the Mojave desert
• Natural and manmade hazards
• No driver, no remote control
• No dynamic passing
• Fastest vehicle wins the race (and a 2 million dollar prize)
ROBOTIC VEHICLES - Stanley
Stanford Racing Team: www.stanfordracing.org
[Figure: Stanley software architecture - sensor interfaces (lasers, camera, radar, GPS, IMU, wheel velocity), perception (laser/vision/radar mappers, road finder, UKF pose estimation, surface assessment), planning & control (path planner, steering and throttle/brake control), user and vehicle interfaces, and global services (process controller, health monitor, data logger, inter-process communication and time servers).]
Planning = Rolling out Trajectories
Game playing
• IBM's DEEP BLUE became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match.
Speech recognition
• A traveler calling United Airlines to book a flight can have the entire conversation guided by an automated speech recognition and dialog management system.
Other state-of-the-art examples
• Autonomous planning and scheduling
• Spam fighting
• Logistics planning
Robotics
• Roomba robotic vacuum cleaners for home use
• PackBot - to handle hazardous materials and explosives
Applications of AI
• Some of the applications are given below:
• Business: financial strategies, giving advice
• Engineering: checking designs, offering suggestions to create new products
• Manufacturing: assembly, inspection and maintenance
• Mining: used when conditions are dangerous
• Hospitals: monitoring, diagnosing and prescribing
• Education: teaching, e-tutoring
• Household: advice on cooking, shopping, etc.
• Farming: pruning trees and selectively harvesting mixed crops
Applications of AI
• Robots
• Chess-playing program
• Voice recognition system
• Speech recognition system
• Grammar checker
• Pattern recognition
• Medical diagnosis
• Game Playing
• Machine Translation
• Resource Scheduling
• Expert systems (diagnosis, advisory, planning, etc)
• Machine learning
Subfields of AI
Activity
• To what extent are the following computer systems instances of artificial intelligence?
  • Supermarket bar code scanners
  • Web search engines
  • Voice-activated telephone menus
  • Internet routing algorithms that respond dynamically to the state of the network
What are Agent and Environment?
• An agent is anything that can perceive its environment through sensors and act upon that environment through effectors (actuators).
• A human agent has sensory organs such as eyes, ears, nose, tongue and skin parallel to the sensors, and other organs such as hands, legs and mouth for effectors.
• A robotic agent has cameras and infrared range finders for the sensors, and various motors and actuators for effectors.
• A software agent has encoded bit strings as its programs and actions.
Agents and environments
• Percept: the agent's perceptual inputs
• Percept sequence: the complete history of everything the agent has perceived
• The agent function maps any given percept sequence to an action [f: P* → A]
• The agent program runs on the physical architecture to produce f
• Agent = architecture + program
Agent Function and Agent Program
• The agent function maps any given percept history (sequence) to an action:
  [f: P* → A]
  Agent: Percept* → Action
• The ideal mapping specifies which action an agent ought to take at any point in time.
• The agent program runs on the physical architecture to produce the agent function f.
• Agent = Architecture + Program
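To make the separation concrete, here is a minimal Python sketch (my illustration, not from the textbook); the names Agent, program, and step are assumptions:

from typing import Callable

Percept = tuple   # whatever the sensors deliver, modeled here as a tuple
Action = str

class Agent:
    """Agent = architecture + program: the architecture (this class)
    feeds percepts to the program; the program maps percepts to actions."""
    def __init__(self, program: Callable[[Percept], Action]):
        self.program = program

    def step(self, percept: Percept) -> Action:
        # The architecture delivers one percept and executes the program on it.
        return self.program(percept)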
Vacuum-cleaner world

• Percepts: location and contents, e.g., [A,Dirty]


• Actions: Left, Right, Suck, NoOp
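Using the Agent sketch above, the two-square vacuum world can be driven by a simple reflex program; this mirrors the textbook's REFLEX-VACUUM-AGENT, though the Python rendering is my own:

def reflex_vacuum_program(percept):
    """Condition-action rules for the two-square vacuum world."""
    location, status = percept        # e.g., ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

vacuum = Agent(reflex_vacuum_program)
print(vacuum.step(('A', 'Dirty')))    # -> Suck
print(vacuum.step(('A', 'Clean')))    # -> Right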
Rational agents - The Concept of Rationality
• A rational agent is an agent whose actions try to maximize some performance measure.
• An agent should strive to "do the right thing", based on what it can
perceive and the actions it can perform.
• The right action is the one that will cause the agent to be most
successful
• Performance measure: An objective criterion for success of an agent's
behavior
• E.g., performance measure of a vacuum-cleaner agent could be amount
of dirt cleaned up, amount of time taken, amount of electricity consumed,
amount of noise generated, etc.
Rational agents
• Rational Agent: For each possible percept sequence, a
rational agent should select an action that is expected to
maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in
knowledge the agent has.
Rational agents
• Rationality is distinct from omniscience (all-knowing with infinite
knowledge)

• Agents can perform actions in order to modify future percepts so as


to obtain useful information (sometimes called information
gathering, exploration)

• An agent is autonomous if its behavior is determined by its own


experience (with ability to learn and adapt)
• The rationality of an agent depends on four things:
  • the performance measure defining the agent's degree of success
  • the percept sequence, the sequence of all the things perceived by the agent
  • the agent's prior knowledge of the environment
  • the actions that the agent can perform
Task environment
• To design a rational agent we need to specify a task environment
  • a problem specification for which the agent is the solution
• PEAS: to specify a task environment
  • Performance measure
  • Environment
  • Actuators
  • Sensors
PEAS: Specifying an automated taxi driver

Performance measure:
• safe, fast, legal, comfortable, maximize profits
Environment:
• roads, other traffic, pedestrians, customers
Actuators:
• steering, accelerator, brake, signal, horn
Sensors:
• cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
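As an illustrative sketch (not from the slides), a PEAS specification can be written down as a plain data structure; all field names here are assumptions:

from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS task-environment specification."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=['safe', 'fast', 'legal', 'comfortable', 'maximize profits'],
    environment=['roads', 'other traffic', 'pedestrians', 'customers'],
    actuators=['steering', 'accelerator', 'brake', 'signal', 'horn'],
    sensors=['cameras', 'sonar', 'speedometer', 'GPS', 'odometer'],
)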
PEAS model - Activity
• Google Voice Assistant
• Smart Fitness Tracker
• Smart Refrigerator
• Home Security System
Properties of task environments

1. Fully observable vs. partially observable
• If an agent's sensors give it access to the complete state of the environment at each point in time, then the environment is effectively fully observable
  • i.e., if the sensors detect all aspects that are relevant to the choice of action

• Partially observable
  An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
  Example:
  • A local dirt sensor of the cleaner cannot tell whether other squares are clean or not
Properties of task environments

2. Deterministic vs. stochastic
• If the next state of the environment is completely determined by the current state and the actions executed by the agent, then the environment is deterministic; otherwise, it is stochastic.
• The cleaner and the taxi driver are:
  • stochastic, because of some unobservable aspects → noise or unknown factors
Properties of task environments

3. Episodic vs. sequential
• An episode = the agent's single pair of perception and action
• The quality of the agent's action does not depend on other episodes
  • every episode is independent of the others
• An episodic environment is simpler
  • the agent does not need to think ahead
• Sequential
  • the current action may affect all future decisions
  • e.g., taxi driving and chess
Properties of task environments

4. Static vs. dynamic
• A dynamic environment is always changing over time
  • e.g., the number of people in the street
• While in a static environment it does not
  • e.g., the destination
• Semidynamic
  • the environment does not change over time
  • but the agent's performance score does
Properties of task environments

5. Discrete vs. continuous
• If there are a limited number of distinct states, and clearly defined percepts and actions, the environment is discrete
  • e.g., a chess game
• Continuous: taxi driving
Properties of task environments

6. Single agent vs. multiagent
• Playing a crossword puzzle - single agent
• Chess playing - two agents
• Competitive multiagent environment
  • chess playing
• Cooperative multiagent environment
  • automated taxi driver
  • avoiding collisions
Properties of task environments

7. Known vs. unknown
• This distinction refers not to the environment itself but to the agent's (or designer's) state of knowledge about the environment.
• In a known environment, the outcomes for all actions are given (example: solitaire card games).
• If the environment is unknown, the agent will have to learn how it works in order to make good decisions (example: a new video game).
Properties of task environments

Deterministic and Discrete:
Example: Sudoku puzzles, where each move leads to a predictable outcome and there are a finite number of cells and numbers to fill in.

Deterministic and Continuous:
Example: a satellite orbiting the Earth.
Deterministic: the future state of the satellite (its position and velocity) can be precisely predicted based on its current state and the physical laws governing its motion (Newton's laws of motion and the law of universal gravitation).
Continuous: the satellite's position and velocity can take any value within a range.
Properties of task environments

Stochastic and Discrete:


Example: Board games involving dice rolls, like Monopoly, where each
roll introduces an element of randomness.
Stochastic and Continuous:
Example: Stock market prediction, where prices change continuously
and are influenced by many unpredictable factors.
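To tie the properties together, the following sketch (my own; the flag choices follow the textbook's usual analysis) records where two example environments fall on each dimension:

from dataclasses import dataclass

@dataclass
class TaskEnvironmentProfile:
    """Classification of a task environment along the standard dimensions."""
    name: str
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool          # chess with a clock is really semidynamic, flagged False here
    discrete: bool
    single_agent: bool

chess = TaskEnvironmentProfile('chess with a clock', True, True, False, False, True, False)
taxi = TaskEnvironmentProfile('taxi driving', False, False, False, False, False, False)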
Examples of task environments
Structure of agents
• Agent = architecture + program
  ▪ Architecture = some sort of computing device (sensors + actuators)
  ▪ (Agent) Program = some function that implements the agent mapping = "?"
  ▪ Agent Program = the job of AI
Agent programs
• Input for Agent Program
• Only the current percept
• Input for Agent Function
• The entire percept sequence
• The agent must remember all of them
• Implement the agent program as
• A look up table (agent function)
Agent programs

• Skeleton design of an agent program
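The skeleton the slide refers to is the textbook's TABLE-DRIVEN-AGENT; a minimal Python rendering (the table contents below are illustrative, not exhaustive) might look like this:

percepts = []   # persistent: the percept sequence, initially empty

def table_driven_agent(percept, table):
    """Look the entire percept sequence up in a table of actions."""
    percepts.append(percept)
    return table.get(tuple(percepts))

# Illustrative (partial) table for the vacuum world:
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}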


Agent Programs

• P = the set of possible percepts
• T = lifetime of the agent
  ⮚ the total number of percepts it receives
• Size of the look-up table
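The slide leaves the count implicit; following the textbook, if the agent receives one percept per time step, the table needs

\sum_{t=1}^{T} |P|^{t}

entries, one for every possible percept sequence of length up to T. Even for modest P and T this is astronomically large, which is why table-driven agents are infeasible in practice.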


Agent programs
• Despite its huge size, the look-up table does what we want.
• The key challenge of AI:
  • find out how to write programs that, to the extent possible, produce rational behavior
    • from a small amount of code
    • rather than from a large number of table entries
  • e.g., a five-line program implementing Newton's method
    • vs. huge tables of square roots, sines, cosines
Types of agent programs

• Four types
• Simple reflex agents

• Model-based reflex agents

• Goal-based agents

• Utility-based agents
Reflex Agent Diagram

[Figure: a simple reflex agent - sensors report "what the world is like now"; condition-action rules determine "what should I do now"; actuators act on the environment.]
Reflex Agent Program
• application of simple rules to situations

function SIMPLE-REFLEX-AGENT(percept) returns action
    static: rules                            // set of condition-action rules
    condition := INTERPRET-INPUT(percept)
    rule := RULE-MATCH(condition, rules)
    action := RULE-ACTION(rule)
    return action
Simple reflex agents
• Use just condition-action rules
  • the rules are of the form "if ... then ..."
• Efficient, but have a narrow range of applicability
  • because knowledge sometimes cannot be stated explicitly
• Work only if the environment is fully observable
Simple reflex agents
[Figure: simple reflex agent schematic]
A Simple Reflex Agent in Nature

percepts: (size, motion)

RULES:
(1) If small moving object, then activate SNAP
(2) If large moving object, then activate AVOID and inhibit SNAP
ELSE (not moving), then NOOP (needed for completeness)

Action: SNAP or AVOID or NOOP
Model-based Reflex Agents
• For a world that is partially observable
  • the agent has to keep track of an internal state
    • that depends on the percept history
    • reflecting some of the unobserved aspects
    • e.g., driving a car and changing lanes
• Requires two types of knowledge
  • how the world evolves independently of the agent
  • how the agent's actions affect the world
Model-based reflex agents
[Figure: model-based reflex agent schematic]
Example: Table Agent with Internal State

IF                                                  THEN
Saw an object ahead, turned right,                  Go straight
and it's now clear ahead
Saw an object on my right, turned right,            Halt
and object ahead again
See no objects ahead                                Go straight
See an object ahead                                 Turn randomly
Example Reflex Agent With Internal State: Wall-Following

Actions: left, right, straight, open-door
Rules (a code sketch of these rules follows below):
1. If open(left) & open(right) & open(straight) then choose randomly between right and left
2. If wall(left) and open(right) and open(straight) then straight
3. If wall(right) and open(left) and open(straight) then straight
4. If wall(right) and open(left) and wall(straight) then left
5. If wall(left) and open(right) and wall(straight) then right
6. If wall(left) and door(right) and wall(straight) then open-door
7. If wall(right) and wall(left) and open(straight) then straight
8. (Default) Move randomly
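These rules translate directly into a reflex program; in the sketch below (my own), the percept format - a dict mapping each direction to 'open', 'wall', or 'door' - is an assumption:

import random

def wall_following_program(percept):
    """Illustrative encoding of the wall-following rules above."""
    left, right, straight = percept['left'], percept['right'], percept['straight']
    if left == 'open' and right == 'open' and straight == 'open':
        return random.choice(['right', 'left'])                  # rule 1
    if left == 'wall' and right == 'open' and straight == 'open':
        return 'straight'                                        # rule 2
    if right == 'wall' and left == 'open' and straight == 'open':
        return 'straight'                                        # rule 3
    if right == 'wall' and left == 'open' and straight == 'wall':
        return 'left'                                            # rule 4
    if left == 'wall' and right == 'open' and straight == 'wall':
        return 'right'                                           # rule 5
    if left == 'wall' and right == 'door' and straight == 'wall':
        return 'open-door'                                       # rule 6
    if left == 'wall' and right == 'wall' and straight == 'open':
        return 'straight'                                        # rule 7
    return random.choice(['left', 'right', 'straight'])          # rule 8 (default)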
Model-Based Reflex Agent Program
• application of simple rules to situations

function REFLEX-AGENT-WITH-STATE(percept) returns action
    static: rules        // set of condition-action rules
            state        // description of the current world state
            action       // most recent action, initially none

    state := UPDATE-STATE(state, action, percept)
    rule := RULE-MATCH(state, rules)
    action := RULE-ACTION[rule]
    return action
Goal-based agents
• The current state of the environment alone is not always enough
  • the goal is another issue to achieve
  • it provides the judgment of rationality / correctness
• Actions are chosen to achieve goals, based on
  • the current state
  • the current percept
Goal-based agents
• Conclusion
  • Goal-based agents are less efficient
  • but more flexible
    • different goals lead to different tasks for the same agent
  • Search and planning
    • two other sub-fields in AI
    • to find out the action sequences that achieve the agent's goal
  • Used in various applications, such as robotics, computer vision, and natural language processing.
Goal-based agents
[Figure: goal-based agent schematic]
Utility-based agents
• Goals alone are not enough to generate high-quality behavior
  • e.g., meals in a canteen: good or not?
• Many action sequences achieve the goals
  • some are better and some worse
• If goal means success, then utility means the degree of success (how successful it is)
Utility-based agents
[Figure: utility-based agent schematic]
Utility-based agents
• State A is said to have higher utility if state A is preferred over other states
• Utility is therefore a function that maps a state onto a real number
  • the degree of success
Utility-based agents (3)
• Utility has several advantages:
  • When there are conflicting goals,
    • only some of the goals, but not all, can be achieved
    • utility describes the appropriate trade-off
  • When there are several goals
    • none of which is achieved with certainty
    • utility provides a way of weighing them for decision-making
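A minimal deterministic sketch of utility-maximizing action selection (my own illustration; result and utility are assumed helper functions, with result(state, action) returning the successor state and utility(state) returning a real number):

def utility_based_action(state, actions, result, utility):
    """Pick the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))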
Agents
Some general features characterizing agents:
• autonomy
• goal-orientedness
• collaboration
• flexibility
• ability to be self-starting
• temporal continuity
• character
• adaptiveness
• mobility
• capacity to learn
Classification of agents
🔾 Interface Agents
AI techniques to provide assistance to the user
🔾 Mobile agents
capable of moving around networks gathering information
🔾 Co-operative agents
communicate with, and react to, other agents in a multi-agent
systems within a common environment
🔾 Reactive agents
“reacts” to a stimulus or input that is governed by some state or
event in its environment

Distributed Computing Agents
🔾 Common learning goal (strong sense)
🔾 Separate goals but information sharing (weak
sense)

Learning Agents
• After an agent is programmed, can it work immediately?
  • No, it still needs teaching
• In AI,
  • once an agent is built
  • we teach it by giving it a set of examples
  • and test it using another set of examples
• We then say the agent learns
  • a learning agent
Learning Agents
• Four conceptual components
  • Learning element
    • makes improvements
  • Performance element
    • selects external actions
  • Critic
    • tells the learning element how well the agent is doing with respect to a fixed performance standard
      (feedback from the user or examples: good or not?)
  • Problem generator
    • suggests actions that will lead to new and informative experiences
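The four components can be wired together as in the following sketch (my own; the plain-callable interfaces are assumptions for illustration, not the textbook's pseudocode):

class LearningAgent:
    """Sketch of the four conceptual components of a learning agent."""
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # selects external actions
        self.learning_element = learning_element         # makes improvements
        self.critic = critic                             # judges against a fixed standard
        self.problem_generator = problem_generator       # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                            # how well are we doing?
        self.learning_element(feedback, self.performance_element)  # improve the performance element
        exploratory = self.problem_generator(percept)              # maybe suggest an experiment
        if exploratory is not None:
            return exploratory
        return self.performance_element(percept)                   # otherwise act normally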
Learning agents
[Figure: learning agent schematic]
Automated taxi example
• Performance element-whatever collection of
knowledge and procedures the taxi has for selecting
its driving actions.
• The taxi goes out on the road and drives, using this
performance element

Automated taxi example
• Critic-observes the world and passes information along
to the learning element.
• For example, after the taxi makes a quick left
turn across three lanes of traffic, the critic observes
the shocking language used by other drivers.
• From this experience, the learning element is able to
formulate a rule saying this was a bad action, and the
performance element is modified by installation of the
new rule.
Automated taxi example
• The problem generator- might identify certain areas of
behavior in need of improvement and suggest experiments,
such as trying out the brakes on different road surfaces under
different conditions.
• The Learning element-if the taxi exerts a certain braking
pressure when driving on a wet road, then it will soon find out
how much deceleration is actually achieved.
• Clearly, these two learning tasks are more difficult if the
environment is only partially observable.

Automated taxi example
• The situation is slightly more complex for a utility-based agent
that wishes to learn utility information. For example, suppose
the taxi-driving agent receives no tips from passengers who
have been thoroughly shaken up during the trip.
• The external performance standard must inform the agent
that the loss of tips is a negative contribution to its overall
performance; then the agent might be able to learn that
violent maneuvers do not contribute to its own utility.

Automated taxi example
• In a sense, the performance standard distinguishes part
of the incoming percept as a reward (or penalty) that
provides direct feedback on the quality of the agent’s
behavior.
• Hard-wired performance standards such as pain and
hunger in animals can be understood in this way.

How do the components of agent programs work?
• A question for a student of AI is, "How on earth do these components work?"
• Agent's organization
  a) Atomic representation: each state of the world is a black box that has no internal structure. E.g., each state is a city. AI algorithms: search, games, Markov decision processes, hidden Markov models, etc.
  b) Factored representation: each state has some attribute-value properties. E.g., GPS location, amount of gas in the tank. AI algorithms: constraint satisfaction and Bayesian networks.
  c) Structured representation: relationships between the objects of a state can be explicitly expressed. AI algorithms: first-order logic, knowledge-based learning, natural language understanding.
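As a small illustrative sketch (my own; all names and values are assumptions), the same world state can be written in the three representations of increasing structure:

# Atomic: the state is an opaque label with no internal structure.
atomic_state = 'city_B'

# Factored: the state is a set of attribute-value pairs.
factored_state = {'gps': (12.84, 80.15), 'fuel_litres': 31.5, 'city': 'B'}

# Structured: objects and the relations between them are explicit.
structured_state = [('in', 'truck1', 'city_B'), ('loaded', 'cargo7', 'truck1')]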
The major points to recall are as follows:
• An agent is something that perceives and acts in an environment.
• The agent function for an agent specifies the action taken by the
agent in response to any percept sequence.
• The performance measure evaluates the behavior of the agent in an
environment.
• A rational agent acts so as to maximize the expected value of the
performance measure, given the percept sequence it has seen so far.
• A task environment specification includes the performance measure,
the external environment, the actuators, and the sensors. In
designing an agent, the first step must always be to specify the task
environment as fully as possible.
The major points to recall are as follows:
• The agent program implements the agent function. There exists a variety of basic agent-program designs, reflecting the kind of information made explicit and used in the decision process.
• The designs vary in efficiency, compactness, and flexibility. The
appropriate design of the agent program depends on the nature of
the environment.
• Simple reflex agents respond directly to percepts, whereas model-
based reflex agents maintain internal state to track aspects of the
world that are not evident in the current percept.
• Goal-based agents act to achieve their goals, and utility-based
agents try to maximize their own expected “happiness.”
• All agents can improve their performance through learning.
