Solving Problems by Searching
Problem Solving
Problem:
◦ A goal together with a set of means for achieving it is called a
problem
Search:
◦ The process of exploring what those means can do is called
search
Problem Solving Agents
Problem solving agent
◦ A kind of “goal based” agent
◦ Finds sequences of actions that lead to desirable states.
Intelligent agents are supposed to act in such a way that
the environment goes through a sequence of states that
maximizes the performance measure
Problem-Solving Agent
[Figure: the agent is connected to its environment through sensors
and actuators; the "?" marks the decision process inside the agent]
• Formulate Goal
• Formulate Problem
◦ States
◦ Actions
• Find Solution
Goal Formulation
Goals help organize behavior by limiting the objectives
that the agent is trying to achieve.
Goal formulation is the first step in problem solving
As well as formulating a goal, the agent may wish to
decide on some other factors that affect the
desirability of different ways of achieving the goal
Problem Formulation
Problem formulation is the process of deciding what
actions and states to consider; it follows goal
formulation
◦ State: a state is the representation of the elements of the given
problem
◦ Actions: can be viewed as causing transition between states
Search, Solution & Execution
Search: examining the different possible sequences of
actions that lead to states of known value and choosing the
best one; the process of looking for such a sequence is
called search
Solution: an action sequence that leads from the initial state
to the goal
Execution: the phase in which the actions that the solution
recommends are carried out
Vacuum World Example
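Since the figure itself is not reproduced here, the following is a minimal sketch of the standard two-location vacuum world as a search problem, assuming locations A and B that are each either dirty or clean, and actions Left, Right, Suck; the function names (result, successors, goal_test) are illustrative, not from the slides.

# Two-location vacuum world (assumed layout: squares A and B).
# A state is (agent_location, dirt_at_A, dirt_at_B): 2 * 2 * 2 = 8 states in total.

ACTIONS = ["Left", "Right", "Suck"]

def result(state, action):
    """Transition model: the state reached by applying one action."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)
    raise ValueError(f"unknown action: {action}")

def successors(state):
    """S(x): all (action, next_state) pairs reachable from x by a single action."""
    for action in ACTIONS:
        yield action, result(state, action)

def goal_test(state):
    """Goal: both squares are clean, wherever the agent is."""
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

initial_state = ("A", True, True)   # agent at A, both squares dirty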
Problem Formulation
Problem
◦ Collection of information that the agent will use to decide
what to do
Initial State
◦ The starting state of the problem, defined in a suitable manner
Operator
◦ An action or a set of actions that moves the problem from one
state to another
◦ Given a particular state x, S(x) returns the set of states
reachable from x by any single action
State Space
◦ The set of all states reachable from the initial state by any
sequence of actions
Problem Formulation
Path
◦ A path in a state space is simply any sequence of actions
leading from one state to another
Goal Test
◦ A test applied to a state which returns true if we have reached
a state that solves the problem
Path Cost
◦ A path cost function is a function that assigns a cost to the
path
Solution
◦ Path from the initial state to the state that satisfies the goal test
Problem Formulation
State Set Space
◦ The initial state set and the successor function define the state
set space of the problem: the set of all state sets reachable
from the initial state set
Abstraction
◦ The process of removing detail from a representation is
called abstraction
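Pulling these pieces together, here is a minimal sketch of a problem definition in code; the class and parameter names are illustrative and not taken from the slides.

# A problem bundles the initial state, the operators (as a successor
# function), the goal test, and a path cost function.

class Problem:
    def __init__(self, initial_state, successors, goal_test,
                 step_cost=lambda s, a, s2: 1):
        self.initial_state = initial_state
        self.successors = successors   # S(x): state -> iterable of (action, next_state)
        self.goal_test = goal_test     # state -> True if the state solves the problem
        self.step_cost = step_cost     # cost of a single action; defaults to 1 per step

    def path_cost(self, path):
        """Cost of a path given as a sequence of (state, action, next_state) steps."""
        return sum(self.step_cost(s, a, s2) for (s, a, s2) in path)

For the vacuum world above, successors and goal_test would be the functions sketched earlier, and every step would cost 1.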
Measuring Problem Solving Performance
The effectiveness of a search can be measured in at
least three ways:
◦ Does it find a solution at all?
◦ Is it a good solution (one with a low path cost)?
◦ What is the search cost (the time and memory required to find a
solution)?
The total cost of search is the sum of path cost and the
search cost
The agents must somehow decide what resources to
devote to search and what resources to devote to
execution (for example, in large, complicated problems there is a
trade-off to be made: the agent can search for a very long time to
get an optimal solution, or settle more quickly for a worse one)
Example: The 8-puzzle
States: locations of tiles
Actions/Operators: move blank left, right, up, down
Goal test: goal state (given)
Path cost: 1 per move
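A sketch of this formulation, representing a state as a tuple of 9 entries in row-major order with 0 for the blank; the goal layout below is an assumption, and any fixed goal state works.

# 8-puzzle: a state is a tuple of 9 tile values, 0 = blank.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)                       # assumed goal layout
MOVES = {"up": -3, "down": +3, "left": -1, "right": +1}  # index offset of the blank

def successors(state):
    """Yield (action, next_state) pairs by moving the blank; each move costs 1."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        if (action == "up" and row == 0) or (action == "down" and row == 2) or \
           (action == "left" and col == 0) or (action == "right" and col == 2):
            continue                                    # the blank cannot leave the board
        swap = blank + delta
        new = list(state)
        new[blank], new[swap] = new[swap], new[blank]   # slide the neighbouring tile
        yield action, tuple(new)

def goal_test(state):
    return state == GOAL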
Example
You have a program that outputs the message “illegal input record” when fed a
certain file of input records. You know that processing of each record is
independent of the other records. You want to discover what record is illegal.
Initial State
Considering all input records
Goal Test
Considering a single record, and it gives “illegal input” message.
Successor Function
Run again on the first half of the records
Run again on the second half of the records
Note: your choice of move depends on whether a run gives the error or not.
Cost Function
Number of runs
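A sketch of this formulation in code, assuming a hypothetical helper gives_error(records) that runs the program on the given subset of records and reports whether the "illegal input record" message appears.

# Binary search over the input records, halving the suspect set on each run.
# gives_error(records) is a hypothetical helper: run the program on exactly
# these records and return True if it prints "illegal input record".

def find_illegal_record(records, gives_error):
    runs = 0                                    # path cost: number of program runs
    while len(records) > 1:
        mid = len(records) // 2
        first_half = records[:mid]
        runs += 1
        if gives_error(first_half):             # which half to keep depends on the outcome
            records = first_half
        else:
            records = records[mid:]
    return records[0], runs                     # goal: a single record that triggers the error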
Search strategies
The choice of which state to expand first is
determined by the search strategy
Think of the search process as building up a search tree.
The root of the search tree is a search node
corresponding to the initial state
The leaf nodes of the tree correspond to states that do
not have successors in the tree, either because they have
not been expanded yet or because they were expanded but
generated the empty set.
Search Trees
A tree is a graph that:
1. is connected but becomes disconnected on removing any
edge
2. is connected and acyclic
3. has precisely one path between any two nodes
Property 3 (unique paths) makes trees much easier to
search, so we will start with search on (rooted) trees;
the leaf nodes of the tree correspond to states that have
not been expanded further
Node Data Structure
STATE
PARENT
ACTION
COST
DEPTH
Implementation: states vs. nodes
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search
tree; it includes a state, parent node, action, path cost g(x),
and depth
Fringe or frontier: the collection of nodes that are waiting
to be expanded is called the fringe or frontier
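A minimal sketch of this node data structure; the field names follow the slide (STATE, PARENT, ACTION, COST, DEPTH), while the child and solution helpers are illustrative additions.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # STATE: the state this node corresponds to
    parent: Optional["Node"] = None   # PARENT: the node that generated this one
    action: Any = None                # ACTION: the action applied to the parent
    path_cost: float = 0.0            # COST: g(x), cost of the path from the root
    depth: int = 0                    # DEPTH: number of steps from the root

def child(node, action, state, step_cost=1):
    """Build the child node reached from `node` by one action."""
    return Node(state, node, action, node.path_cost + step_cost, node.depth + 1)

def solution(node):
    """Recover the action sequence by following parent links back to the root."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))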
Search strategies
A search strategy is defined by picking the order of
node expansion
Strategies are evaluated along the following
dimensions:
◦ completeness: does it always find a solution if one exists?
◦ time complexity: number of nodes generated
◦ space complexity: maximum number of nodes in memory
◦ optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
◦ b: maximum branching factor of the search tree
◦ d: depth of the least-cost solution
◦ m: maximum depth of the state space (may be ∞)
Uninformed search strategies
Uninformed search strategies use only the information
available in the problem definition
They have no information about the number of steps
or the path cost from the current state to the goal
◦ Breadth-first search
◦ Uniform-cost search
◦ Depth-first search
◦ Depth-limited search
◦ Iterative deepening search
Breadth-first search
In this strategy, the root node is expanded first, then
all the nodes generated by the root node are expanded
next, and then their successors, and so on.
In general, all the nodes at depth d in the search tree
are expanded before the nodes at depth d + 1.
If there is a solution, breadth-first search is guaranteed
to find it.
If there are several solutions, breadth-first search will
always find the shallowest goal state first.
Breadth-first search
Expand shallowest unexpanded node
Implementation:
◦ fringe is a FIFO queue, i.e., new successors go at end
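A minimal sketch of this idea, assuming a successors(state) function that yields (action, next_state) pairs as in the earlier examples; a repeated-state check is added for practicality, although the slides describe plain tree search.

from collections import deque

def breadth_first_search(initial_state, successors, goal_test):
    """Expand the shallowest unexpanded node first: the fringe is a FIFO queue."""
    if goal_test(initial_state):
        return []                                    # empty action sequence solves it
    fringe = deque([(initial_state, [])])            # (state, actions taken so far)
    seen = {initial_state}                           # skip repeated states
    while fringe:
        state, actions = fringe.popleft()            # take from the FRONT (FIFO)
        for action, next_state in successors(state):
            if next_state in seen:
                continue
            if goal_test(next_state):
                return actions + [action]
            seen.add(next_state)
            fringe.append((next_state, actions + [action]))  # new successors go at the END
    return None                                      # no solution

On the vacuum world sketched earlier, breadth_first_search(("A", True, True), successors, goal_test) returns the shortest action sequence ['Suck', 'Right', 'Suck'].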
[Figures: step-by-step breadth-first expansion of a search tree]
Properties of Breadth-First Search
Complete
◦ Yes if b (max branching factor) is finite
Time
◦ 1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1))
◦ exponential in d
Space
◦ O(b^(d+1))
◦ Keeps every node in memory
◦ This is the big problem; an agent that generates nodes at 10
MB/sec will produce about 860 GB in 24 hours (10 MB/s × 86,400 s ≈ 864 GB)
Optimal
◦ Yes (if cost is 1 per step); not optimal in general
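As a quick check of the time bound above, here is a small sketch evaluating the formula; the values of b and d are only illustrative.

# Nodes generated by breadth-first search in the worst case:
# 1 + b + b^2 + ... + b^d + b(b^d - 1), which is O(b^(d+1)).
def bfs_generated(b, d):
    return sum(b**i for i in range(d + 1)) + b * (b**d - 1)

# Example: b = 10, d = 5 -> 111,111 + 999,990 = 1,111,101 nodes,
# on the order of b^(d+1) = 10^6.
print(bfs_generated(10, 5))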
Depth-First Search
Depth-first search always expands one of the nodes at
the deepest level of the tree.
Only when the search hits a dead end (a non goal node
with no expansion) does the search go back and
expand nodes at shallower levels.
This strategy can be implemented by GENERAL-
SEARCH with a queuing function that always puts the
newly generated states at the front of the queue.
Because the expanded node was the deepest, its
successors will be even deeper and are therefore now
the deepest.
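A matching sketch in which the fringe behaves as a LIFO stack, so newly generated successors go on top and the deepest node is expanded next; this is plain tree search as described above, and the node cap is only an illustrative guard, since unrestricted depth-first search may never terminate.

def depth_first_search(initial_state, successors, goal_test, max_nodes=1_000_000):
    """Expand the deepest unexpanded node first: the fringe is a LIFO stack."""
    fringe = [(initial_state, [])]               # (state, actions taken so far)
    generated = 1
    while fringe:
        state, actions = fringe.pop()            # take from the TOP (LIFO)
        if goal_test(state):
            return actions
        for action, next_state in successors(state):
            fringe.append((next_state, actions + [action]))   # new states go on top
            generated += 1
            if generated > max_nodes:            # crude guard against infinite descent
                return None
    return None

On the vacuum world above, for instance, this version can keep regenerating the same states forever (hence the guard), which is exactly the incompleteness discussed next.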
Depth-First Search
The drawback of depth-first search is that it can get stuck
going down the wrong path.
Many problems have very deep or even infinite search
trees, so depth-first search will never be able to recover
from an unlucky choice at one of the nodes near the top of
the tree.
On such trees the search keeps going downward without
backing up, even when a shallow solution exists elsewhere.
Thus, on these problems depth-first search will either get
stuck in an infinite loop and never return a solution, or it
may eventually find a solution path that is longer than the
optimal solution.
That means depth-first search is neither complete nor
optimal.
Depth-First Search
[Figures: step-by-step depth-first expansion of a search tree]
Properties of Depth-First Search
Complete
◦ No: fails in infinite-depth spaces and in spaces with loops
Modify to avoid repeated states along the path
◦ Yes: in finite spaces
Time
◦ O(b^m)
◦ Not great if m is much larger than d
◦ But if the solutions are dense, this may be faster than breadth-
first search
Space
◦ O(bm)…linear space
Optimal
◦ No
Depth-Limited Search
Avoid the pitfalls of depth-first search by imposing a
cut-off on the maximum depth of the path
A variation of depth-first search that uses a depth limit
◦ Alleviates the problem of unbounded trees
◦ Search to a predetermined depth l
◦ Nodes at depth l have no successors
Same as depth-first search if l = ∞
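A recursive sketch of depth-limited search; the CUTOFF sentinel is an illustrative device for distinguishing "the depth limit was hit" from "no solution exists in this subtree".

CUTOFF = object()       # sentinel: the depth limit was reached somewhere below

def depth_limited_search(state, successors, goal_test, limit, actions=()):
    """Depth-first search that never descends more than `limit` steps."""
    if goal_test(state):
        return list(actions)
    if limit == 0:
        return CUTOFF                                # nodes at depth l get no successors
    cutoff_occurred = False
    for action, next_state in successors(state):
        result = depth_limited_search(next_state, successors, goal_test,
                                      limit - 1, actions + (action,))
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return result                            # found a solution below this node
    return CUTOFF if cutoff_occurred else None       # None: no solution in this subtree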
Properties of Depth-Limited Search
Complete
◦ Yes, if l ≥ d (no if l < d)
Time
◦ O(b^l)
Space
◦ O(bl)
Optimal
◦ No (when l > d, it may return a solution deeper than the optimal one)
Iterative Deepening Search
Iterative deepening depth-first search
◦ Uses depth-first search
◦ Finds the best depth limit
Gradually increases the depth limit: 0, 1, 2, … until a goal
is found
◦ Combines the benefits of breadth-first and depth-first search
◦ It is optimal and complete, like breadth-first search, but has
only the modest memory requirements of depth-first search.
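A sketch built directly on the depth-limited search above, raising the limit 0, 1, 2, … until a goal is found.

from itertools import count

def iterative_deepening_search(initial_state, successors, goal_test):
    """Run depth-limited search with limits 0, 1, 2, ... until a solution appears."""
    for limit in count(0):
        result = depth_limited_search(initial_state, successors, goal_test, limit)
        if result is CUTOFF:
            continue                  # the limit was too shallow: deepen and retry
        return result                 # an action list, or None if no solution exists at any depth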
Iterative Deepening Search
[Figures: iterative deepening search with increasing depth limits]
Properties of Iterative Deepening Search
Complete
◦ Yes
Time
◦ O(b^d)
Space
◦ O(bd) (linear)
Optimal
◦ Yes, if step cost = 1
◦ Can be modified to explore a uniform-cost tree
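For a feel of the cost of repeating the shallow levels, here is a small sketch counting generated nodes below the root for iterative deepening versus breadth-first search; the values of b and d are only illustrative.

# A node at depth i is generated (d - i + 1) times by iterative deepening,
# once for every depth limit >= i; breadth-first search generates it once.
def ids_generated(b, d):
    return sum((d - i + 1) * b**i for i in range(1, d + 1))

def bfs_generated_to_d(b, d):
    return sum(b**i for i in range(1, d + 1))

# Example: b = 10, d = 5 -> 123,450 for IDS vs 111,110 for BFS:
# only about 11% more work despite re-running the shallow levels.
print(ids_generated(10, 5), bfs_generated_to_d(10, 5))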
Iterative Deepening Search
In general, iterative deepening search is the preferred
uninformed search method when the search space is large
and the depth of the solution is not known