Agents - states
• Atomic
• Factored
• Structured
Atomic: each state of the world is indivisible; it has no internal
structure.
Ex: finding a path from Kashmir to Kanyakumari via some
sequence of cities.
The state of the world is the name of the city we are in – a single discernible
property, identical to or different from that of another state.
The algorithms underlying search, game playing, hidden Markov
models, and Markov decision processes work with atomic state
representations.
Factored: the state consists of a vector of attribute values.
Constraint satisfaction algorithms, propositional logic, planning,
Bayesian networks, and machine learning algorithms work with
factored state representations.
Structured: the state includes objects, each of which may have
attributes of its own as well as relationships to other objects.
First-order logic, first-order probability models, knowledge-based
learning, and NLP work with structured state representations.
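The three representations can be contrasted with a small sketch (illustrative Python, not from the slides; the city names and attribute values are invented):

```python
# Atomic: the state is an indivisible label with no internal structure.
atomic_state = "Srinagar"

# Factored: the state is a vector of attribute values (here, a dict).
factored_state = {"city": "Srinagar", "fuel": 0.7, "toll_paid": False}

# Structured: the state contains objects with their own attributes and
# relationships to other objects.
class City:
    def __init__(self, name):
        self.name = name
        self.roads_to = []  # relationship: which City objects are reachable

srinagar, jammu = City("Srinagar"), City("Jammu")
srinagar.roads_to.append(jammu)

# An atomic state can only be compared whole; the others expose parts.
print(atomic_state, factored_state["fuel"], srinagar.roads_to[0].name)
```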
Problem solving as search
How the problem of an agent deciding what to do can be cast as
the problem of searching for a path in a graph.
The agent has a state-based model of the world, with no uncertainty and with goals to achieve.
The agent can determine how to achieve its goals by
searching in its representation of the world state space for a way
to get from its current state to a goal state.
It can find a sequence of actions that will achieve its goal before it has to act in the world.
This problem can be abstracted to the mathematical problem of finding a
path from a start node to a goal node in a directed graph.
Problem solving as search
The agent constructs a set of potential partial solutions to a problem that
can be checked to see whether they truly are solutions or whether they could lead to solutions.
Search proceeds by repeatedly selecting a partial solution,
stopping if it is a path to a goal, and
otherwise extending it by one more arc in all possible ways.
When an agent is given a problem, it is usually given only a description that lets it recognize a solution,
not an algorithm to solve it. It has to search for a solution.
Humans are able to use intuition to jump to solutions to difficult problems.
The difficulty of search, and the fact that humans can solve some search problems efficiently, suggest that
computer agents should exploit knowledge about special cases to guide them to a solution.
This extra knowledge beyond the search space is called heuristic knowledge.
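The select–check–extend loop described on this slide can be written as a short generic search (a minimal sketch, not from the slides; the tiny graph and its node names are invented for illustration):

```python
from collections import deque

def generic_search(start, is_goal, neighbors):
    """Repeatedly select a partial solution (a path from the start),
    stop if it is a path to a goal, otherwise extend it by one more arc
    in all possible ways."""
    frontier = deque([[start]])          # the set of partial solutions
    while frontier:
        path = frontier.popleft()        # select a partial solution
        node = path[-1]
        if is_goal(node):                # stop: it is a path to a goal
            return path
        for nxt in neighbors(node):      # extend by one more arc
            if nxt not in path:          # avoid revisiting along this path
                frontier.append(path + [nxt])
    return None                          # no solution exists

# Illustrative tiny graph (hypothetical data):
graph = {"s": ["a", "b"], "a": ["g"], "b": ["g"], "g": []}
print(generic_search("s", lambda n: n == "g", lambda n: graph[n]))
```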
State Spaces
One general formulation of intelligent action is in terms of state spaces.
A state contains all of the information necessary to predict the effects of an action
and to determine whether it is a goal state.
State-space searching assumes that
• the agent has perfect knowledge of the state space and can observe what state it is in (i.e., there is full
observability);
• the agent has a set of actions that have known deterministic effects;
• some states are goal states, the agent wants to reach one of these goal states,
and the agent can recognize a goal state; and
• a solution is a sequence of actions that will get the agent from its current state to a goal state.
State Spaces
Example:
Consider the robot delivery domain and the task of finding a path from one location to another.
This can be modeled as a state space search problem, where the states are locations.
Assume that the agent can use a lower-level controller to carry out the high-level action of getting from one
location to a neighboring location.
Thus, at this level of abstraction, the actions can involve deterministic traveling between neighboring locations.
An example problem is where the robot is outside room r103, at position o103,
and the goal is to get to room r123.
A solution is a sequence of actions that will get the robot to room r123.
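A sketch of this level of abstraction in Python (only the location names o103, r103, and r123 come from the example; the neighbor relation below is a hypothetical corridor layout I have invented for illustration):

```python
# Hypothetical neighbor relation between locations (invented layout;
# only o103, r103, r123 are named in the example).
neighbors = {
    "o103": ["r103", "o109"],
    "o109": ["o103", "o119"],
    "o119": ["o109", "r123"],
    "r103": ["o103"],
    "r123": ["o119"],
}

# Deterministic high-level action: go(loc) moves to a neighboring location,
# relying on a lower-level controller to carry out the actual travel.
def go(state, loc):
    assert loc in neighbors[state], "can only move to a neighboring location"
    return loc

# A solution is a sequence of such actions from o103 to the goal r123.
state = "o103"
for loc in ["o109", "o119", "r123"]:
    state = go(state, loc)
print(state)
```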
State-space problem
A state-space problem consists of
• a set of states;
• a distinguished set of states called the start states;
• a set of actions available to the agent in each state;
• an action function that, given a state and an action, returns a new state;
• a set of goal states, often specified as a Boolean function, goal(s), that is true when s is a goal state; and
• a criterion that specifies the quality of an acceptable solution. For example,
any sequence of actions that gets the agent to the goal state may be acceptable, or
there may be costs associated with actions and the agent may be required to find a sequence that has minimal
total cost --- an optimal solution.
Alternatively, the agent may be satisfied with any solution that is within 10% of optimal.
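The components listed above can be collected into a small interface (a hedged sketch; the class name, method names, and the toy integer instance are my own, not from the slides):

```python
# Hypothetical encoding of the state-space problem components above.
class StateSpaceProblem:
    def __init__(self, start_states, goal, actions, result, cost=None):
        self.start_states = start_states  # distinguished set of start states
        self.goal = goal                  # Boolean function goal(s)
        self.actions = actions            # actions(s): actions available in s
        self.result = result              # result(s, a): the action function
        # cost(s, a): optional per-action cost, for optimal solutions;
        # default unit cost means "fewest actions" is optimal.
        self.cost = cost or (lambda s, a: 1)

# Toy instance: states are integers, actions add 1 or 2, the goal is 5.
p = StateSpaceProblem(
    start_states=[0],
    goal=lambda s: s == 5,
    actions=lambda s: ["+1", "+2"],
    result=lambda s, a: s + int(a),
)
s = p.start_states[0]
print(p.goal(p.result(p.result(s, "+2"), "+2")))  # 0 -> 2 -> 4: not a goal
```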
Graph Searching
abstract the general mechanism of searching
in terms of searching for paths in directed graphs.
To solve a problem, first define the underlying search space and then
apply a search algorithm to that search space.
Many problem-solving tasks can be transformed into the problem of finding a path in a graph.
Searching in graphs provides an appropriate level of abstraction within which to study simple problem solving
independent of a particular domain.
A (directed) graph consists of a set of nodes and a set of directed arcs between nodes.
The idea is to find a path along these arcs from a start node to a goal node.
A directed graph consists of
• a set N of nodes and
• a set A of ordered pairs of nodes called arcs.
There can be infinitely many nodes and arcs. The graph need not be represented explicitly; only a procedure is
required that can generate nodes and arcs as needed.
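The last point — the graph need not be explicit, only a procedure that generates nodes and arcs as needed — can be sketched as follows (an illustrative example of my own, not from the slides; the graph is infinite, so the search prunes nodes beyond the goal to stay finite):

```python
from collections import deque

# An implicit, potentially infinite directed graph: nodes are integers,
# with arcs n -> n+1 and n -> 2n. No explicit node or arc sets are stored;
# a procedure generates the outgoing arcs of a node on demand.
def successors(n):
    yield n + 1
    yield 2 * n

# Breadth-first search over the implicit graph for a path from start to goal.
def bfs_implicit(start, goal):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen and nxt <= goal:  # prune: keep search finite
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs_implicit(1, 10))
```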
Goal-based Agent – Problem Solving Agent
• Uses Atomic representations for the states of the world
• Problems/ Task environment - simple
• Solutions – Agents –
• General purpose search algorithms –
• Uninformed
• Informed
• A fixed sequence of actions
• Intelligent agents are supposed to maximize their performance measure
• Achieving this is simplified if the agent can adopt a goal and aim at satisfying it
• Goals help organize behaviour by limiting the objectives the agent is trying to achieve, and hence the actions it needs
to consider
• GOAL FORMULATION
• The first step of problem solving, based on the current situation and the agent's performance measure
• Goal – a set of world states – exactly those states in which the goal is satisfied
• The agent's task is to find out how to act, now and in the future, so that it reaches a goal state
• Before it can do this, the agent needs to decide what sorts of actions and states it should consider
Problem Solving agent
• What level of actions?
• Move the left foot forward an inch? Turn the steering wheel one degree left?
• There is too much uncertainty in the world at this level of detail, and there would be too many steps in the solution
• PROBLEM FORMULATION
• The process of deciding what actions and states to consider, given a goal
• Agent may consider actions at the level of driving from one major town to another
• Each state, therefore, corresponds to ‘being in a particular town’
Example – Driving in Romania
Goal: Drive to Bucharest
Current State: Arad
3 roads from Arad:
Arad → Zerind
Arad → Sibiu
Arad → Timisoara
None achieve the goal!
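This is where search comes in: although no single road from Arad reaches Bucharest, a search over paths does. A minimal sketch using part of the textbook's Romania road map (breadth-first search is my choice here; the slides have not yet committed to a particular algorithm):

```python
from collections import deque

# Part of the Romania road map from the textbook example (two-way roads).
roads = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad", "Oradea"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Timisoara": ["Arad", "Lugoj"],
    "Oradea": ["Zerind", "Sibiu"],
    "Lugoj": ["Timisoara"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def find_route(start, goal):
    """Breadth-first search over paths; returns a city sequence or None."""
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in roads[path[-1]]:
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None

print(find_route("Arad", "Bucharest"))
```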
Example
• Environment
• Unless the agent is familiar with the geography of Romania, it will not know which road to follow
• Agent will not know which of its possible actions is best, as it does not yet know enough about the state that results from
taking each action
• If the agent has no additional information – if the environment is unknown – then the agent has no choice but to try one of the
actions at random!
• The map of Romania is available to the agent
• The map provides the agent with information about the states it might get itself into and the actions it can take
• The agent can use this information to consider subsequent stages of a hypothetical journey via each of the three towns, trying
to find a journey that eventually gets it to Bucharest
• Once it has found a path on the map, from Arad to Bucharest, it can achieve its goal by carrying out the driving actions
that correspond to the legs of the journey.
• An agent with several immediate options of unknown value can decide what to do by first examining future actions that
eventually lead to states of known value
• Examining future actions means we need to be more specific about the environment – assume the environment is
Observable, so that the agent always knows the current state – each city has some indication to let the agent know its presence
when the driver arrives at that place
• The environment is assumed to be Discrete
• At any given state, there are only finitely many actions to choose from – in navigating Romania, each city is connected to
a small number of other cities
Problem Solving Agent
•Assumptions
• Assume environment is Known – agent knows which states are reached by each action
• Assume the environment is deterministic – each action has exactly one outcome – means if the agent chooses to
drive from Arad to Sibiu, it does end up at Sibiu.
• Under these assumptions the solution to any problem is a fixed sequence of actions
“If the agent knows the initial state, and the environment is known and deterministic, it knows exactly where it
will be after the first action, and what it will perceive. Since only one percept is possible after the first action, the
solution can specify only one possible second action, and so on”
• The process of looking for a sequence of actions that reaches the goal is called SEARCH
• A search algorithm takes the problem as input and returns a solution in the form of an action sequence
• Once a solution is found, the actions it recommends can be carried out – Execution Phase
• FORMULATE – SEARCH – EXECUTE design for the agent
• After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. Then, the agent
uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do.
Problem Solving Agent
function Simple-Problem-Solving-Agent(percept) returns an action
persistent: sequence, an action sequence, initially empty
state, some description of the current world state
goal, a goal, initially null
problem, a problem formulation

state ← Update-State(state, percept)
if sequence is empty then
goal ← Formulate-Goal(state)
problem ← Formulate-Problem(state, goal)
sequence ← Search(problem)
if sequence = failure then return a null action
action ← First(sequence)
sequence ← Rest(sequence)
return action
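The pseudocode above can be rendered as runnable Python (a minimal sketch of the formulate–search–execute loop; the hard-coded goal, the tiny road map, and the use of destination city names as actions are simplifications I have added):

```python
from collections import deque

class SimpleProblemSolvingAgent:
    """Formulate-search-execute: search once, then replay the solution,
    doing whatever the solution recommends as the next thing to do."""

    def __init__(self, roads):
        self.roads = roads       # known, deterministic environment (a map)
        self.sequence = []       # persistent action sequence, initially empty

    def __call__(self, percept):
        state = percept          # Update-State: the percept names the city
        if not self.sequence:
            goal = "Bucharest"   # Formulate-Goal (hard-coded for the sketch)
            # Formulate-Problem + Search:
            self.sequence = self.search(state, goal)
            if not self.sequence:
                self.sequence = []
                return None      # null action: search failed (or at goal)
        action, self.sequence = self.sequence[0], self.sequence[1:]
        return action            # First(sequence); keep Rest(sequence)

    def search(self, start, goal):
        """Breadth-first search; actions are the successive destinations."""
        frontier, visited = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path[1:]
            for city in self.roads.get(path[-1], []):
                if city not in visited:
                    visited.add(city)
                    frontier.append(path + [city])
        return None

roads = {"Arad": ["Sibiu"], "Sibiu": ["Fagaras"], "Fagaras": ["Bucharest"]}
agent = SimpleProblemSolvingAgent(roads)
print(agent("Arad"), agent("Sibiu"), agent("Fagaras"))
```

Because the environment is known and deterministic, the sequence can be computed once, "eyes closed", and then executed percept by percept.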