Module 1
ARTIFICIAL INTELLIGENCE
Dr. Kavipriya G
Assistant Professor, Senior Grade 1
SCOPE, VIT Chennai
Cabin details: AB3 First Floor, Annexure 109 A, Cabin No. 24
COURSE OBJECTIVES
Reference Books
TOPICS
• Introduction: Evolution of AI, State of the Art, Different Types of Artificial Intelligence, Applications of AI, Subfields of AI, Intelligent Agents, Structure of Intelligent Agents, Environments
INTRODUCTION
• What is Intelligence?
• The ability to acquire and apply knowledge and skills.
ARTIFICIAL INTELLIGENCE
• Artificial Intelligence is the ability of a computer to act like a human being.
Introduction to AI / What is Intelligence?
● Intelligence, taken as a whole, consists of the following skills: the capacity to learn and to solve problems.
• 1952-1969
• GPS (General Problem Solver) - Newell and Simon
• Geometry theorem prover - Gelernter (1959)
• Samuel's checkers program that learns (1952)
• McCarthy - Lisp (1958), Advice Taker; Robinson's resolution
• Microworlds: integration, blocks world
• 1962: the perceptron convergence theorem (Rosenblatt)
History of Artificial Intelligence
• 1950: Alan Turing publishes Computing Machinery and Intelligence.
• 1956: John McCarthy coins the term 'artificial intelligence' at the first-ever AI conference at
Dartmouth College.
• 1958: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that 'learned' through trial and error.
• 1980s: Neural networks that use a backpropagation algorithm to train themselves become widely used in AI applications.
• 1997: IBM's Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).
• 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!
• 2015: Baidu's Minwa supercomputer uses a special kind of deep neural network called a
convolutional neural network to identify and categorize images with a higher rate of
accuracy than the average human.
• 2016: DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match.
Shakey: AI's First Robot (1960s)
4 Categories of Definitions for AI
• Systems that act like humans
• Systems that think like humans
• Systems that think rationally
• Systems that act rationally
• Cognitive Science is an interdisciplinary science underlying the "thinking humanly" approach
Limited Rationality
• acting appropriately when there is not enough time to do all the computations one might like
STATE OF THE ART
Robotic vehicles
• A driverless robotic car named STANLEY sped through the rough terrain of the Mojave desert at 22 mph, finishing the 132-mile course first to win the 2005 DARPA Grand Challenge.
[Map: route of the 2004 DARPA Grand Challenge, Barstow, CA, to Primm, NV]
[Figure: Stanley's software architecture — laser, camera, and radar interfaces feed laser/vision/radar mappers that build a map and obstacle list; GPS position, GPS compass, and the Touareg vehicle interface feed UKF pose estimation of vehicle state (pose, velocity); planning follows the RDDF corridor and sends a trajectory to steering and throttle/brake control; a process controller and health monitor handle heartbeats, Linux process start/stop, power on/off, and the wireless E-stop's pause/disable and emergency-stop commands.]
• The agent function maps percept sequences to the action to take at any point in time
• The agent program runs on the physical architecture to produce the agent function f
• Agent = Architecture + Program
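As a minimal illustration (not from the slides), here is a Python sketch of Agent = Architecture + Program: a table-driven agent program whose lookup table plays the role of the agent function f, mapping percept sequences to actions. The percept encoding and table entries are assumptions for a toy two-square vacuum world.

percept_history = []  # percept sequence seen so far

# The agent function f as an explicit table (only a tiny fragment shown).
TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

def table_driven_agent(percept):
    # Agent program: record the percept, look up the full sequence.
    percept_history.append(percept)
    return TABLE.get(tuple(percept_history), "NoOp")

print(table_driven_agent(("A", "Dirty")))  # -> Suck
print(table_driven_agent(("A", "Clean")))  # -> Right

Such a table grows exponentially with the length of the percept sequence, which is why the agent-program designs discussed later avoid representing f explicitly.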
Vacuum-cleaner world
Performance measure:
• cleanliness, e.g., amount of dirt cleaned up, time taken
Environment:
• two squares, A and B, each possibly containing dirt
Actuators:
• move left, move right, suck
Sensors:
• location sensor, dirt sensor
PEAS: Specifying an automated taxi driver
Performance measure:
• safe, fast, legal, comfortable, maximize profits
Environment:
• roads, other traffic, pedestrians, customers
Actuators:
• steering, accelerator, brake, signal, horn
Sensors:
• cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
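To make the PEAS structure concrete, and as a scaffold for the activity that follows, here is a hedged Python sketch; the PEAS class and its field names are illustrative, and the values are the taxi example from above.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
print(taxi.sensors)

Each activity item below (voice assistant, fitness tracker, and so on) can be filled into the same four fields.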
PEAS model - Activity
• Google Voice Assistant
• Smart Fitness Tracker
• Smart Refrigerator
• Home Security System
Properties of task environments
• Deterministic vs. stochastic
• The environment is stochastic when the next state involves noise or unknown factors
Properties of task environments
• Sequential
• The current action may affect all future decisions
• E.g., taxi driving and chess
Properties of task environments
• Semidynamic
• The environment itself does not change over time,
• but the agent's performance score does
Properties of task environments
• Dynamic
• The environment changes while the agent deliberates, e.g., other traffic keeps moving while the taxi is avoiding a collision
Types of agents
• Four types
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Reflex Agent Diagram
[Diagram: sensors perceive the environment ("what the world is like now"); condition-action rules determine "what I should do now"; actuators carry out the action in the environment.]
Reflex Agent Program
• application of simple rules to situations

function SIMPLE-REFLEX-AGENT(percept) returns action
    static: rules  // set of condition-action rules
    condition := INTERPRET-INPUT(percept)
    rule := RULE-MATCH(condition, rules)
    action := RULE-ACTION(rule)
    return action
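The same program in runnable Python, as a sketch for a toy vacuum world; the rule set and percept format are assumptions, not from the slides.

# Condition-action rules, in priority order.
RULES = [
    ("Dirty", "Suck"),
    ("At-A", "Right"),
    ("At-B", "Left"),
]

def interpret_input(percept):
    # INTERPRET-INPUT: abstract the raw percept into a condition.
    location, status = percept
    if status == "Dirty":
        return "Dirty"
    return "At-A" if location == "A" else "At-B"

def rule_match(condition, rules):
    # RULE-MATCH: return the first rule whose condition matches.
    for cond, action in rules:
        if cond == condition:
            return (cond, action)
    return (None, "NoOp")

def simple_reflex_agent(percept):
    condition = interpret_input(percept)
    rule = rule_match(condition, RULES)
    return rule[1]  # RULE-ACTION: the action part of the matched rule

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_agent(("B", "Clean")))  # -> Left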
A Simple Reflex Agent in Nature
Percepts: (size, motion)
RULES:
(1) If small moving object, then activate SNAP
(2) If large moving object, then activate AVOID and inhibit SNAP
ELSE (not moving), then NOOP (needed for completeness)
Action: SNAP or AVOID or NOOP
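These rules translate directly into a tiny reflex program; a sketch, where the percept encoding (size, moving) is an assumption:

def frog_agent(size, moving):
    if moving and size == "large":
        return "AVOID"  # rule (2) fires and inhibits SNAP
    if moving and size == "small":
        return "SNAP"   # rule (1)
    return "NOOP"       # not moving: the completeness case

print(frog_agent("small", True))   # -> SNAP
print(frog_agent("large", True))   # -> AVOID
print(frog_agent("small", False))  # -> NOOP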
Model-based Reflex Agents
• For a world that is partially observable,
• the agent has to keep track of an internal state
• that depends on the percept history
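A minimal sketch of the idea in Python (the world model and percept format are illustrative): the agent folds each percept into an internal state, and that state lets it act on squares it can no longer observe.

state = {"dirty_squares": set()}  # internal state built from percept history

def update_state(state, percept):
    # Fold the new percept into the internal model of the world.
    location, status = percept
    if status == "Dirty":
        state["dirty_squares"].add(location)
    else:
        state["dirty_squares"].discard(location)
    return state

def model_based_reflex_agent(percept):
    update_state(state, percept)
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if state["dirty_squares"]:
        # Head toward a square remembered as dirty, even though
        # the agent cannot observe it right now.
        target = next(iter(state["dirty_squares"]))
        return "Right" if target > location else "Left"
    return "NoOp"

print(model_based_reflex_agent(("B", "Dirty")))  # -> Suck
print(model_based_reflex_agent(("B", "Clean")))  # -> NoOp (nothing remembered dirty)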
Example: Table Agent with Internal State
[Figure: rule table and state diagram for an agent with internal state; details not recoverable from the slide]
Goal-based agents
• Conclusion
• Goal-based agents are less efficient
• but more flexible:
• the same agent can perform different tasks when given different goals
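A sketch of that flexibility in Python: the agent consults a model of action outcomes and a goal test, so changing the goal changes the task without changing the agent. The map of outcomes is an illustrative assumption.

# Model: (state, action) -> predicted next state.
TRANSITIONS = {
    ("home", "walk"): "office",
    ("home", "drive"): "mall",
    ("office", "walk"): "home",
}

def goal_based_agent(state, goal):
    # Pick any action whose predicted outcome satisfies the goal.
    for (s, action), outcome in TRANSITIONS.items():
        if s == state and outcome == goal:
            return action
    return "NoOp"  # no single action reaches the goal from here

print(goal_based_agent("home", "office"))  # -> walk
print(goal_based_agent("home", "mall"))    # -> drive (same agent, new goal)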
Utility-based agents
• Goals alone are not enough to generate high-quality behavior
• A goal gives only a binary success or failure, not how successful the behavior is
• E.g., a meal in the canteen: the goal of eating is met, but was it good or not?
Utility-based agents
• State A is said to have higher utility
• if state A is preferred over other states
Utility-based agents (3)
• Utility has several advantages:
• When there are conflicting goals,
• and only some of the goals (not all) can be achieved,
• the utility function specifies the appropriate trade-off
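A sketch of the trade-off in Python: two conflicting goals (safety and speed) are combined into a single utility number, and the agent picks the action with the highest utility. The weights and outcome scores are illustrative assumptions.

def utility(outcome):
    # Scalar trade-off between the conflicting goals.
    return 0.7 * outcome["safety"] + 0.3 * outcome["speed"]

OUTCOMES = {
    "drive_fast": {"safety": 0.4, "speed": 0.9},  # utility 0.55
    "drive_slow": {"safety": 0.9, "speed": 0.3},  # utility 0.72
}

best = max(OUTCOMES, key=lambda a: utility(OUTCOMES[a]))
print(best)  # -> drive_slow, because safety is weighted more heavily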
Agents
Some general features characterizing agents:
• Autonomy
• goal-orientedness
• collaboration
• flexibility
• ability to be self-starting
• temporal continuity
• character
• adaptiveness
• mobility
• capacity to learn.
Classification of agents
🔾 Interface Agents
AI techniques to provide assistance to the user
🔾 Mobile agents
capable of moving around networks gathering information
🔾 Co-operative agents
communicate with, and react to, other agents in a multi-agent system within a common environment
🔾 Reactive agents
“reacts” to a stimulus or input that is governed by some state or
event in its environment
Distributed Computing Agents
🔾 Common learning goal (strong sense)
🔾 Separate goals but information sharing (weak
sense)
Learning Agents
• After an agent is programmed, can it work immediately?
• No, it still needs teaching
• In AI, once an agent is done,
• we teach it by giving it a set of examples,
• and test it by using another set of examples
• We then say the agent learns: it is a learning agent
Learning Agents
• Four conceptual components
• Learning element
• Makes improvements
• Performance element
• Selects external actions
• Critic
• Tells the learning element how well the agent is doing with respect to a fixed performance standard
(feedback from the user or from examples: good or not?)
• Problem generator
• Suggests actions that will lead to new and informative experiences
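A sketch wiring the four components together in Python for a toy braking task; all names, numbers, and the task itself are illustrative assumptions.

import random

knowledge = {"brake_below": 0.1}  # performance element's tunable knowledge

def performance_element(percept):
    # Select an external action using current knowledge.
    return "brake" if percept["distance"] < knowledge["brake_below"] else "cruise"

def critic(percept, action):
    # Score the action against a fixed performance standard.
    crashed = action == "cruise" and percept["distance"] < 0.2
    return -1.0 if crashed else 0.1

def learning_element(feedback):
    # Make improvements: brake earlier after negative feedback.
    if feedback < 0:
        knowledge["brake_below"] += 0.05

def problem_generator():
    # Suggest new, informative experiences: varied following distances.
    return {"distance": random.random()}

for _ in range(200):
    percept = problem_generator()
    action = performance_element(percept)
    learning_element(critic(percept, action))

print(knowledge)  # brake_below has grown toward the safe threshold 0.2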
Automated taxi example
• Performance element: whatever collection of knowledge and procedures the taxi has for selecting its driving actions.
• The taxi goes out on the road and drives, using this performance element.
Automated taxi example
• Critic: observes the world and passes information along to the learning element.
• For example, after the taxi makes a quick left turn across three lanes of traffic, the critic observes the shocking language used by other drivers.
• From this experience, the learning element is able to formulate a rule saying this was a bad action, and the performance element is modified by installation of the new rule.
Automated taxi example
• Problem generator: might identify certain areas of behavior in need of improvement and suggest experiments, such as trying out the brakes on different road surfaces under different conditions.
• Learning element: if the taxi exerts a certain braking pressure when driving on a wet road, then it will soon find out how much deceleration is actually achieved.
• Clearly, these two learning tasks are more difficult if the environment is only partially observable.
Automated taxi example
• The situation is slightly more complex for a utility-based agent
that wishes to learn utility information. For example, suppose
the taxi-driving agent receives no tips from passengers who
have been thoroughly shaken up during the trip.
• The external performance standard must inform the agent
that the loss of tips is a negative contribution to its overall
performance; then the agent might be able to learn that
violent maneuvers do not contribute to its own utility.
Automated taxi example
• In a sense, the performance standard distinguishes part
of the incoming percept as a reward (or penalty) that
provides direct feedback on the quality of the agent’s
behavior.
• Hard-wired performance standards such as pain and
hunger in animals can be understood in this way.
How do the components of agent programs work?
• The question for a student of AI is, "How on earth do these components work?"
How do the components of agent programs work?
• Agent's organization
a) Atomic Representation: each state of the world is a black box that has no internal structure. E.g., in route finding, each state is a city. AI algorithms: search, games, Markov decision processes, hidden Markov models, etc.
b) Factored Representation: each state has attribute-value properties. E.g., GPS location, amount of gas in the tank. AI algorithms: constraint satisfaction and Bayesian networks.
c) Structured Representation: relationships between the objects of a state can be explicitly expressed. AI algorithms: first-order logic, knowledge-based learning, natural language understanding.
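A sketch of the three representations in Python; the encodings are illustrative only.

# Atomic: the state is an opaque label; algorithms can only test
# identity (e.g., graph search over cities).
atomic_state = "Chennai"

# Factored: the state is a set of attribute-value pairs.
factored_state = {"gps": (13.08, 80.27), "fuel_litres": 32.5}

# Structured: objects and the relationships between them are
# explicit, as in a small first-order-logic-style fact base.
structured_state = [
    ("ConnectedTo", "Chennai", "Vellore"),
    ("In", "VIT", "Chennai"),
]

print(atomic_state, factored_state["fuel_litres"], structured_state[0])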
The major points to recall are as follows:
• An agent is something that perceives and acts in an environment.
• The agent function for an agent specifies the action taken by the
agent in response to any percept sequence.
• The performance measure evaluates the behavior of the agent in an
environment.
• A rational agent acts so as to maximize the expected value of the
performance measure, given the percept sequence it has seen so far.
• A task environment specification includes the performance measure,
the external environment, the actuators, and the sensors. In
designing an agent, the first step must always be to specify the task
environment as fully as possible.
The major points to recall are as follows:
• The agent program implements the agent function. There exists a variety of basic agent-program designs reflecting the kind of information made explicit and used in the decision process.
• The designs vary in efficiency, compactness, and flexibility. The
appropriate design of the agent program depends on the nature of
the environment.
• Simple reflex agents respond directly to percepts, whereas model-
based reflex agents maintain internal state to track aspects of the
world that are not evident in the current percept.
• Goal-based agents act to achieve their goals, and utility-based
agents try to maximize their own expected “happiness.”
• All agents can improve their performance through learning.