problem solving in Artificial intelligence .pdf
1
A.I.: Solving problems by
searching
Chapter III: The Wrath of
Exponentially-Large State
Spaces
2
Outline
 Problem-solving agents
 Problem types
 Problem formulation
 Example problems
 Basic search algorithms
3
Overview
 Recall our previous discussion of reflex agents. Such agents cannot operate well
in environments for which the state to action mapping is too large to store or
would take too long to learn.
 Problem-solving agents use atomic representations (see Chapter 2), where
states of the world are considered as wholes, with no internal structure visible to
the problem-solving agent.
 We consider two general classes of search: (1) uninformed search algorithms for
which the algorithm is provided no information about the problem other than its
definition; (2) informed search, where the algorithm is given some guidance.
4
Overview
 Intelligent agents are supposed to maximize their performance measure;
achieving this is sometimes simplified if the agent can adopt a goal and aim to
satisfy it.
 Goals help organize behavior by limiting objectives. We consider a goal to be a set
of world states – exactly those states in which the goal is satisfied.
 Here we consider environments that are known, observable, discrete and
deterministic (i.e. each action has exactly one corresponding outcome).
 The process of looking for a sequence of actions that reaches the goal is called
search. A search algorithm takes a problem as input and returns a solution in
the form of an action sequence.
5
Well-defined problems and solutions
 A problem can be defined formally by (5) components:
 (1) The initial state from which the agent starts.
 (2) A description of possible actions available to the agent: ACTIONS(s)
 (3) A description of what each action does, i.e. the transition model, specified by
a function RESULT(s, a) = s'.
 Together, the initial state, actions and transition model implicitly define the state
space of the problem – the set of all states reachable from the initial state by any
sequence of actions.
 The state space forms a directed network or graph in which the nodes are states
and the edges between nodes are actions. A path in the state space is a sequence
of states connected by a sequence of actions.
6
Well-defined problems and solutions
 (4) The goal test, which determines whether a given state is a goal state.
Frequently the goal test is intuitive (e.g. check if we arrived at the destination) –
but note that it is also sometimes specified by an abstract property (e.g. "checkmate").
 (5) A path cost function that assigns a numeric cost to each path. The problem-
solving agent chooses a cost function that reflects its own performance measure.
 Commonly (but not always), the cost of a path is additive in terms of the
individual actions along a path. Denote the step cost to take action ‘a’ in state s,
arriving in s’ as: c(s,a,s’).
 A key element of successful problem formulation relates to abstraction – the
process of removing (inessential) details from a problem representation.
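The five components above can be collected into a small interface. A minimal Python sketch, with a hypothetical two-state "toggle" world standing in for a real problem:

```python
class Problem:
    """Formal problem: initial state, ACTIONS, RESULT, goal test, step cost."""

    def __init__(self, initial, goal):
        self.initial = initial
        self.goal = goal

    def actions(self, s):
        raise NotImplementedError

    def result(self, s, a):
        raise NotImplementedError

    def goal_test(self, s):
        return s == self.goal

    def step_cost(self, s, a, s2):
        return 1  # c(s, a, s') -- unit cost by default


class ToggleProblem(Problem):
    # Illustrative two-state world: "off" and "on"; one action flips between them.
    def actions(self, s):
        return ["flip"]

    def result(self, s, a):
        return "on" if s == "off" else "off"


p = ToggleProblem("off", "on")
print(p.result("off", "flip"))               # -> on
print(p.goal_test(p.result("off", "flip")))  # -> True
```

The class and method names here are assumptions made for the sketch, not a fixed API.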
7
Problem-solving agents
8
Example: Romania
 On holiday in Romania; currently in Arad.
 Flight leaves tomorrow from Bucharest

 Formulate goal:
 be in Bucharest

 Formulate problem:
 states: various cities
 actions: drive between cities

 Find solution:
 sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
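The formulation above can be written out concretely. A sketch in Python, using the road distances from the standard Romania map for the handful of cities on the routes shown:

```python
# Road segments and their distances (only the cities relevant to the
# Arad -> Bucharest routes are included here).
roads = {
    ("Arad", "Sibiu"): 140,
    ("Arad", "Zerind"): 75,
    ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99,
    ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Fagaras", "Bucharest"): 211,
    ("Rimnicu Vilcea", "Pitesti"): 97,
    ("Pitesti", "Bucharest"): 101,
}
# Driving works both ways, so make the graph undirected.
roads.update({(b, a): d for (a, b), d in list(roads.items())})

def actions(city):
    """Cities reachable by a single drive action."""
    return sorted(b for (a, b) in roads if a == city)

def path_cost(path):
    """Additive cost: sum of step costs c(s, a, s') along the path."""
    return sum(roads[(a, b)] for a, b in zip(path, path[1:]))

print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # -> 450
print(path_cost(["Arad", "Sibiu", "Rimnicu Vilcea", "Pitesti", "Bucharest"]))  # -> 418
```

Note that the solution found above (cost 450) is not the cheapest path; the route through Rimnicu Vilcea and Pitesti costs 418, which is why path cost matters to the search strategy.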
9
Example: Romania
10
Problem types
 Deterministic, fully observable → single-state problem
 Agent knows exactly which state it will be in; solution is a sequence

 Non-observable → sensorless problem (conformant problem)
 Agent may have no idea where it is; solution is a sequence

 Nondeterministic and/or partially observable → contingency problem
 percepts provide new information about current state
 often interleave search, execution

 Unknown state space → exploration problem
11
Example: vacuum world
 Single-state, start in #5.
Solution?

12
Example: vacuum world
 Single-state, start in #5.
Solution? [Right, Suck]

 Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?

13
Example: vacuum world
 Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?
[Right,Suck,Left,Suck]
 Contingency
 Nondeterministic: Suck may
dirty a clean carpet
 Partially observable: location, dirt at current location.
 Percept: [L, Clean], i.e., start in #5 or #7
Solution?
14
Example: vacuum world
 Contingency
 Nondeterministic: Suck may
dirty a clean carpet
 Partially observable: location, dirt at current location.
 Percept: [L, Clean], i.e., start in #5 or #7
Solution? [Right, if dirt then Suck]
15
Selecting a state space
 Real world is absurdly complex
 state space must be abstracted for problem solving
 (Abstract) state = set of real states
 (Abstract) action = complex combination of real actions
 e.g., "Arad → Zerind" represents a complex set of possible routes,
detours, rest stops, etc.
 For guaranteed realizability, any real state "in Arad" must get to
some real state "in Zerind"

 (Abstract) solution =
 set of real paths that are solutions in the real world
 Each abstract action should be "easier" than the original
problem
16
Vacuum world state space graph
 States: integer dirt and robot location.
 Actions: Left, Right, Suck.
 Goal test: no dirt at all locations.
 Path cost: 1 per action.
17
Example: The 8-puzzle
 States: Locations of tiles. (Q: How large is state space?)
 Actions: Move blank left, right, up, down.
 Goal test: s==goal state. (given)
 Path cost: 1 per move.
[Note: optimal solution of n-Puzzle family is NP-hard]
18
Example: 8-queens
 States: Any arrangement of 0 to 8 queens on the
board.
 Actions: Add a queen to any empty square.
 Transition Model: Returns the board with a queen added to the
specified square.
 Goal Test: 8 queens on the board, none attacking.
 Q: How many possible sequences? (~1.8×10^14)
 Revision to problem: arrange queens with one per column, in the
leftmost n columns, with no queen attacking another.
 Actions: Add a queen to any square in the leftmost empty column, such
that no queen attacks another → reduction to just 2,057 states!
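The 2,057 figure can be checked by enumerating the states of the revised formulation directly. A short Python sketch (the helper `count_states` is an illustrative name, not part of the formulation):

```python
def count_states(n=8):
    """Count boards with queens one per leftmost column, none attacking."""
    total = 0

    def extend(rows):
        # rows[i] = row of the queen in column i; every prefix is a state,
        # including the empty board.
        nonlocal total
        total += 1
        col = len(rows)
        if col == n:
            return
        for r in range(n):
            # No shared row, no shared diagonal with any earlier queen.
            if all(r != rr and abs(r - rr) != col - cc
                   for cc, rr in enumerate(rows)):
                extend(rows + [r])

    extend([])
    return total

print(count_states())  # -> 2057
```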
19
Example: robotic assembly
 States: Real-valued coordinates of robot joint angles and of the parts
of the object to be assembled.
 Actions: Continuous motions of robot joints.
 Goal test: Complete assembly.
 Path cost: Time to execute.
 Other examples: TSP (NP-hard), robot navigation, protein folding
(unsolved).
20
Searching for Solutions
 Recall that a solution is an action sequence; accordingly, search
algorithms work by considering various possible action sequences.
 The possible action sequences starting at the initial state form a
search tree with the initial state at the root; the branches are actions
and the nodes correspond to states in the state space of the problem.
 To consider taking various actions, we expand the current state –
thereby generating a new set of states.
 In this way, we add branches from the parent node to children
nodes.
21
Searching for Solutions
 A node with no children is called a leaf; the set of all leaf nodes
available for expansion at any given point is called the frontier.
 The process of expanding nodes on the frontier continues until
either a solution is found or there are no more states to expand.
 We consider the general TREE-SEARCH algorithm next.
 All search algorithms share this basic structure; they vary primarily
according to how they choose which state to expand next – the so-
called search strategy.
 In general, a TREE-SEARCH considers all possible paths (including
infinite ones), whereas a GRAPH-SEARCH avoids consideration of
redundant paths.
22
Tree search algorithms
 Basic idea:
 Offline, simulated exploration of state space by generating
successors of already-explored states (a.k.a. expanding states)
23
Tree search example
26
Implementation: general tree search
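The general TREE-SEARCH idea can be sketched in a few lines of Python; the tiny graph below is purely illustrative, and the FIFO frontier order is just one possible strategy:

```python
from collections import deque

def tree_search(initial, successors, goal_test):
    """Expand frontier paths until a goal state is found."""
    frontier = deque([(initial,)])       # frontier of candidate paths
    while frontier:
        path = frontier.popleft()        # strategy: which leaf to expand
        state = path[-1]
        if goal_test(state):
            return list(path)            # solution: sequence of states
        for s in successors(state):      # expand: generate children
            frontier.append(path + (s,))
    return None                          # frontier exhausted, no solution

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(tree_search("A", lambda s: graph[s], lambda s: s == "D"))  # -> ['A', 'B', 'D']
```

Because no history is kept, the same state can appear on several paths; the explored-set fix discussed below addresses exactly this.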
27
Searching for Solutions
 Naturally, if we allow for redundant paths, then a formerly
tractable problem can become intractable: "Algorithms that forget
their history are doomed to repeat it."
 To avoid exploring redundant paths we can augment the TREE-
SEARCH algorithm with a data structure called the explored set,
which remembers every expanded node (we discard nodes in
explored set instead of adding them to the frontier).
28
Infrastructure for search algorithms
 Search algorithms require a data structure to keep track of the
search tree that is being constructed.
 For each node n of the tree, we have a structure with (4)
components:
 (1) n.STATE: the state in the state space to which the node
corresponds.
 (2) n.PARENT: the node in the search tree that generated this
node.
 (3) n.ACTION: the action that was applied to the parent to
generate the node.
 (4) n.PATH-COST: the cost, traditionally denoted g(n), of the
path from the initial state to the node, as indicated by the parent
pointers.
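The four-field node structure can be sketched directly; `child_node` and `solution` are illustrative helper names:

```python
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, i.e. g(n)

def child_node(node, action, result_state, step_cost=1):
    """Build the node the transition model produces from a parent."""
    return Node(result_state, node, action, node.path_cost + step_cost)

def solution(node):
    """Follow parent pointers back to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

root = Node("Arad")
n1 = child_node(root, "go(Sibiu)", "Sibiu", 140)
n2 = child_node(n1, "go(Fagaras)", "Fagaras", 99)
print(solution(n2), n2.path_cost)  # -> ['go(Sibiu)', 'go(Fagaras)'] 239
```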
29
Implementation: states vs. nodes
 A state is a (representation of) a physical configuration.
 A node is a data structure constituting part of a search tree; it includes
state, parent node, action, path cost g(n), and depth.
 The Expand function creates new nodes, filling in the various fields
and using the Successor function of the problem to create the
corresponding states.
30
Infrastructure for search algorithms
 Now that we have nodes (qua data structures), we need to put them somewhere.
 We use a queue, with operations:
EMPTY?(queue): returns true only if no elements
POP(queue): removes the first element of the queue and returns it.
INSERT(element, queue): inserts an element and returns the resulting queue.
 Recall that queues are characterized by the order in which they store the inserted
nodes:
FIFO (first-in, first-out): pops the oldest element of the queue.
LIFO (last-in, first-out, i.e. a stack) pops the newest element.
PRIORITY QUEUE: pops element with highest “priority.”
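These three queue disciplines map onto standard Python containers; a quick illustration:

```python
from collections import deque
import heapq

# FIFO queue: pops the oldest element first.
fifo = deque()
fifo.append("a"); fifo.append("b")
print(fifo.popleft())        # -> a

# LIFO queue (stack): pops the newest element first.
lifo = []
lifo.append("a"); lifo.append("b")
print(lifo.pop())            # -> b

# Priority queue: pops the element with the best (smallest) key first.
pq = []
heapq.heappush(pq, (5, "low priority"))
heapq.heappush(pq, (1, "high priority"))
print(heapq.heappop(pq)[1])  # -> high priority
```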
31
Search strategies
 A search strategy is defined by picking the order of node expansion
 Strategies are evaluated along the following dimensions:
 completeness: does it always find a solution when one exists?
 time complexity: number of nodes generated
 space complexity: maximum number of nodes in memory
 optimality: does it always find a least-cost solution?
 Time and space complexity are measured in terms of
 b: maximum branching factor of the search tree
 d: depth of the least-cost solution
 m: maximum depth of the state space (may be ∞)
 The size of the state space graph (|V|+|E|)
32
Search strategies
 In more detail:
 Time and space complexity are always considered with respect to some
measure of the problem difficulty (e.g. |V| + |E|).
 In A.I., the state space graph is often represented implicitly by the initial state,
actions and transition model (i.e. we don’t always store it explicitly).
 Search algorithm complexity is frequently expressed in terms of:
b: branching factor (maximum number of successors of any node)
d: depth (of the shallowest goal node)
m: maximum length of any path in the state space.
 To assess the effectiveness of a search algorithm, we can consider just the search
cost, which typically depends on time complexity (and/or memory usage); total
cost combines search cost and path cost of the solution.
33
Uninformed search strategies
 Uninformed search strategies use only the information available in
the problem definition.
 Breadth-first search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search
34
Breadth-first search
 BFS is a simple strategy in which the root node is expanded first,
then all the successors of the root node are expanded next, then
their successors, etc.
 BFS is an instance of the general graph-search algorithm in which
the shallowest unexpanded node is chosen for expansion.
 This is achieved by using a FIFO queue for the frontier.
Accordingly, new nodes go to the back of the queue, and old
nodes, which are shallower than the new nodes, are expanded
first.
 NB: The goal test is applied to each node when it is generated.
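A sketch of BFS as described above, with a FIFO frontier, an explored set, and the goal test applied at generation time (the unit-cost graph is a made-up example):

```python
from collections import deque

def breadth_first_search(initial, successors, goal_test):
    if goal_test(initial):
        return [initial]
    frontier = deque([[initial]])
    explored = {initial}
    while frontier:
        path = frontier.popleft()          # shallowest node first (FIFO)
        for s in successors(path[-1]):
            if s not in explored:
                if goal_test(s):           # test when generated, not expanded
                    return path + [s]
                explored.add(s)
                frontier.append(path + [s])
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["F"], "E": ["F"], "F": []}
print(breadth_first_search("A", lambda s: g[s], lambda s: s == "F"))
# -> ['A', 'B', 'D', 'F']
```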
35
Breadth-first search
 Expand shallowest unexpanded node
 Implementation:
 frontier is a FIFO queue, i.e., new successors go at end
39
Properties of BFS
 Complete? Yes (if b is finite)
 Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
 Space? O(b^d) (keeps every node in memory)
 Optimal? Yes (if cost = 1 per step)
 Space is the bigger problem (more than time)
40
Uniform-cost search
 Expand least-cost unexpanded node
 Implementation:
 frontier = queue ordered by path cost
 Equivalent to breadth-first if step costs all equal
 Complete? Yes, if step cost ≥ ε
 Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉),
where C* is the cost of the optimal solution
 Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
 Optimal? Yes – nodes expanded in increasing order of g(n)
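A sketch of uniform-cost search with a priority queue keyed on g(n); the goal test is applied at expansion rather than generation, which is what preserves optimality (the edge weights below are made up):

```python
import heapq

def uniform_cost_search(initial, successors, goal_test):
    frontier = [(0, initial, [initial])]     # entries: (g, state, path)
    best_g = {initial: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)  # least-cost node first
        if goal_test(state):                 # test on expansion
            return path, g
        for s, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(s, float("inf")):
                best_g[s] = g2
                heapq.heappush(frontier, (g2, s, path + [s]))
    return None

# The cheapest route runs through C, even though B's first edge is cheaper.
edges = {"A": [("B", 1), ("C", 5)], "B": [("D", 9)], "C": [("D", 2)], "D": []}
print(uniform_cost_search("A", lambda s: edges[s], lambda s: s == "D"))
# -> (['A', 'C', 'D'], 7)
```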
41
Depth-first search
 DFS always expands the deepest node in the current frontier of the
search tree. The search proceeds immediately to the deepest level of
the search tree, where the nodes have no successors.
 As those nodes are expanded, they are dropped from the frontier, so
then the search “backs up” to the next deepest node that still has
unexplored successors.
 DFS is an instance of the general graph-search algorithm which uses
a LIFO queue. This means that the most recently generated node is
chosen for expansion.
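A sketch of DFS with a LIFO frontier (a plain Python list used as a stack); the example graph is hypothetical:

```python
def depth_first_search(initial, successors, goal_test):
    frontier = [[initial]]                   # LIFO: list.pop() takes the newest path
    explored = set()
    while frontier:
        path = frontier.pop()                # deepest (most recent) node first
        state = path[-1]
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for s in successors(state):
            if s not in explored:
                frontier.append(path + [s])
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
print(depth_first_search("A", lambda s: g[s], lambda s: s == "F"))
# -> ['A', 'C', 'E', 'F']
```

Note how the search dives down the most recently generated branch (through C) before ever returning to B.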
42
Depth-first search
 Expand deepest unexpanded node
 Implementation:
 frontier = LIFO queue, i.e., put successors at front
54
Properties of depth-first search
 Complete? No: fails in infinite-depth spaces, spaces with
loops
 Modify to avoid repeated states along path
 complete in finite spaces
 Time? O(b^m): terrible if m is much larger than d
 but if solutions are dense, may be much faster than breadth-first
 Space? O(bm), i.e., linear space!
 Optimal? No
55
Depth-limited search
 The failure of DFS in infinite state spaces can be alleviated by
supplying DFS with a pre-determined depth limit l, i.e., nodes at
depth l have no successors.
56
Iterative deepening search
 Iterative deepening search is a general strategy, often used in
combination with DFS tree search, that finds the best depth limit.
 It does this by gradually increasing the limit – first 0, then 1, then
2, and so on – until a goal is found; this will occur when the depth
limit reaches d, the depth of the shallowest goal node.
 Note that iterative deepening search may seem wasteful because
states are generated multiple times, but this, in fact, turns out not to
be too costly (the reason is that most of the nodes are in the bottom
level for a constant branching factor).
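Depth-limited search and the iterative deepening loop on top of it can be sketched as follows (the recursive form assumes a finite acyclic example; a cycle check would be needed in general, and `max_depth` is an arbitrary safety cap added for the sketch):

```python
def depth_limited_search(state, successors, goal_test, limit):
    if goal_test(state):
        return [state]
    if limit == 0:
        return None                          # cutoff: nodes at depth l have no successors
    for s in successors(state):
        result = depth_limited_search(s, successors, goal_test, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(initial, successors, goal_test, max_depth=50):
    # Limits 0, 1, 2, ... until the shallowest goal depth d is reached.
    for limit in range(max_depth + 1):
        result = depth_limited_search(initial, successors, goal_test, limit)
        if result is not None:
            return result
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["E"], "E": []}
print(iterative_deepening_search("A", lambda s: g[s], lambda s: s == "E"))
# -> ['A', 'B', 'D', 'E']
```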
57
Iterative deepening search
 Number of nodes generated in a depth-limited search to depth d
with branching factor b:
NDLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
 Number of nodes generated in an iterative deepening search to
depth d with branching factor b:
NIDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d
 For b = 10, d = 5,
 NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
 NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
 Overhead = (123,456 - 111,111)/111,111 = 11%
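The node counts above are easy to verify in a couple of lines:

```python
b, d = 10, 5

# Depth-limited search to depth d generates each level exactly once.
n_dls = sum(b**i for i in range(d + 1))

# Iterative deepening regenerates level i on iterations i, i+1, ..., d,
# i.e. (d + 1 - i) times.
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))

print(n_dls)                                 # -> 111111
print(n_ids)                                 # -> 123456
print(round(100 * (n_ids - n_dls) / n_dls))  # -> 11  (percent overhead)
```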
58
Iterative deepening search, shown for successive limits l = 0, 1, 2, 3
63
Properties of iterative deepening search
 Complete? Yes
 Time? (d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d)
 Space? O(bd)
 Optimal? Yes, if step cost = 1
64
Bidirectional search
 The main idea with bidirectional search is to run two simultaneous
searches – one forward from the initial state and the other backward
from the goal – hoping that the two searches meet in the middle.
 Key: b^(d/2) + b^(d/2) << b^d.
 Replace goal test with check to see whether frontiers intersect.
 Time complexity (with BFS in both directions): O(b^(d/2)); space
complexity: O(b^(d/2)); the space requirement is a serious weakness.
 Also, it is not always a simple matter to “search backward” – goal
state could be abstract.
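A sketch of bidirectional BFS in which the goal test is replaced by a frontier-intersection check; the chain graph is a toy example, and the helper names are assumptions of the sketch:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        state = frontier.popleft()
        for s in neighbors(state):
            if s not in parents:
                parents[s] = state
                if s in other_parents:   # the two frontiers meet here
                    return s
                frontier.append(s)
        return None

    while frontier_f and frontier_b:
        meet = (expand(frontier_f, parents_f, parents_b)
                or expand(frontier_b, parents_b, parents_f))
        if meet:
            # Stitch the two half-paths together at the meeting state.
            left, s = [], meet
            while s is not None:
                left.append(s)
                s = parents_f[s]
            path = list(reversed(left))
            right = parents_b[meet]
            while right is not None:
                path.append(right)
                right = parents_b[right]
            return path
    return None

# Undirected chain A-B-C-D-E; searching backward just follows edges in reverse.
g = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search("A", "E", lambda s: g[s]))  # -> ['A', 'B', 'C', 'D', 'E']
```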
65
Summary of algorithms (conditions as stated on the earlier slides)

Criterion   BFS               Uniform-cost   DFS      Depth-limited  Iterative deepening
Complete?   Yes (b finite)    Yes (cost ≥ ε) No       No             Yes (b finite)
Time        O(b^d)            O(b^⌈C*/ε⌉)    O(b^m)   O(b^l)         O(b^d)
Space       O(b^d)            O(b^⌈C*/ε⌉)    O(bm)    O(bl)          O(bd)
Optimal?    Yes (unit costs)  Yes            No       No             Yes (unit costs)
66
Repeated states
 Failure to detect repeated states can turn a linear
problem into an exponential one!

67
Graph search
68
Summary
 Before an agent can start searching for solutions, a goal must be
identified and a well-defined problem formulated.
 Problem formulation usually requires abstracting away real-world
details to define a state space that can feasibly be explored.
 A problem consists of (5) parts: initial state, actions, transition
model, goal test function and path cost function.
 The environment of the problem is represented by the state space.
A path through the state space from the initial state to a goal state is
a solution.
 TREE-SEARCH considers all possible paths; GRAPH-SEARCH
avoids consideration of redundant paths.
69
Summary
 Search algorithms are judged on the basis of completeness, optimality,
time complexity and space complexity. Complexity depends on b, the
branching factor in the state space and d, the depth of the shallowest
solution.
 Uninformed search methods have access only to the problem definition,
including:
BFS – expands the shallowest nodes first
Uniform-cost search – expands the node with the lowest path cost, g(n).
DFS – expands the deepest unexpanded node first (depth-limited search adds
a depth bound).
Iterative Deepening Search – calls DFS with increasing depth limits until a
goal is found.
Bidirectional Search – can reduce time complexity but not always
applicable.

problem solving in Artificial intelligence .pdf

  • 1.
    1 A.I.: Solving problemsby searching Chapter III: The Wrath of Exponentially-Large State Spaces
  • 2.
    2 Outline  Problem-solving agents Problem types  Problem formulation  Example problems  Basic search algorithms
  • 3.
    3 Overview  Recall ourprevious discussion of reflex agents. Such agents cannot operate well in environments for which the state to action mapping is too large to store or would take too long to learn.  Problem-solving agents use atomic representations (see Chapter 2), where states of the world are considered as wholes, with no internal structure visible to the problem-solving agent.  We consider two general classes of search: (1) uninformed search algorithms for which the algorithm is provided no information about the problem other than its definition; (2) informed search, where the algorithm is given some guidance.
  • 4.
    4 Overview  Intelligent agentsare supposed to maximize their performance measure; achieving this is sometimes simplified if the agent can adopt a goal and aim to satisfy it.  Goals help organize behavior by limiting objectives. We consider a goal to be a set of world states – exactly those states in which the goal is satisfied.  Here we consider environments that are known, observable, discrete and deterministic (i.e. each action has exactly one corresponding outcome).  The process of looking for a sequence of actions that reaches the goal is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
  • 5.
    5 Well-defined problems andsolutions  A problem can be defined formally by (5) components:  (1) The initial state from which the agent starts.  (2) A description of possible actions available to the agent: ACTIONS(s)  (3) A description of what each action does, i.e. the transition model, specified by a function RESULT (s,a)=a’.  Together, the initial state, actions and transition model implicitly defined the state space of the problem – the set of all states reachable from the initial state by any sequence of actions.  The state space forms a directed network or graph in which the nodes are states and the edges between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions.
  • 6.
    6 Well-defined problems andsolutions  (4) The goal test, which determines whether a given state is a goal state. Frequently the goal test is intuitive (e.g. check if we arrived at the destination) – but note that it is also sometimes specified by an abstract property (e.g. “check mate”).  (5) A path cost function that assigns a numeric cost to each path. The problem- solving agent chooses a cost function that reflects its own performance measure.  Commonly (but not always), the cost of a path is additive in terms of the individual actions along a path. Denote the step cost to take action ‘a’ in state s, arriving in s’ as: c(s,a,s’).  A key element of successful problem formulation relates to abstraction – the process of removing (inessential) details from a problem representation.
  • 7.
  • 8.
    8 Example: Romania  Onholiday in Romania; currently in Arad.  Flight leaves tomorrow from Bucharest   Formulate goal:  be in Bucharest   Formulate problem:  states: various cities  actions: drive between cities   Find solution:  sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
  • 9.
  • 10.
    10 Problem types  Deterministic,fully observable  single-state problem  Agent knows exactly which state it will be in; solution is a sequence   Non-observable  sensorless problem (conformant problem)  Agent may have no idea where it is; solution is a sequence   Nondeterministic and/or partially observable  contingency problem  percepts provide new information about current state  often interleave search, execution   Unknown state space  exploration problem
  • 11.
    11 Example: vacuum world Single-state, start in #5. Solution? 
  • 12.
    12 Example: vacuum world Single-state, start in #5. Solution? [Right, Suck]   Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution? 
  • 13.
    13 Example: vacuum world Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution? [Right,Suck,Left,Suck]  Contingency  Nondeterministic: Suck may dirty a clean carpet  Partially observable: location, dirt at current location.  Percept: [L, Clean], i.e., start in #5 or #7 Solution?
  • 14.
    14 Example: vacuum world Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution? [Right,Suck,Left,Suck]   Contingency  Nondeterministic: Suck may dirty a clean carpet  Partially observable: location, dirt at current location.  Percept: [L, Clean], i.e., start in #5 or #7 Solution? [Right, if dirt then Suck]
  • 15.
    15 Selecting a statespace  Real world is absurdly complex  state space must be abstracted for problem solving  (Abstract) state = set of real states  (Abstract) action = complex combination of real actions  e.g., "Arad  Zerind" represents a complex set of possible routes, detours, rest stops, etc.  For guaranteed realizability, any real state "in Arad“ must get to some real state "in Zerind"   (Abstract) solution =  set of real paths that are solutions in the real world  Each abstract action should be "easier" than the original problem
  • 16.
    16 Vacuum world statespace graph  States: integer dirt and robot location.  Actions: Left, Right, Suck.  Goal test: no dirt at all locations.  Path cost: 1 per action.
  • 17.
    17 Example: The 8-puzzle States: Locations of tiles. (Q: How large is state space?)  Actions: Move blank left, right, up, down.  Goal test: s==goal state. (given)  Path cost: 1 per move. [Note: optimal solution of n-Puzzle family is NP-hard]
  • 18.
    18 Example: 8-queens  States:Any arrangement of 0 to 8 queens on the board.  Actions: Add a queen to any empty square.  Transition Model: Returns the board with a queen added to the specified square.  Goal Test: 8 queens on the board, none attacking.  Q: How many possible sequences? (~1.8x1014)  Revision to problem: arrange queens with one per column, in the leftmost n columns, with no queen attacking another.  Actions: Add a queen to any square in leftmost empty column (with no attacking) reduction to just 2,057 states!
  • 19.
    19 Example: robotic assembly States: Real-valued coordinates of robot joint angles parts of the object to be assembled.  Actions: Continuous motions of robot joints.  Goal test: Complete assembly.  Path cost: Time to execute.  Other examples: TSP (NP-hard), robot navigation, protein folding (unsolved).
  • 20.
    20 Searching for Solutions Recall that a solution is an actions sequence; accordingly, search algorithms work by considering various possible action sequences.  The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to state in the state space of the problem.  To consider taking various actions, we expand the current state – thereby generating a new set of states.  In this way, we add branches from the parent node to children nodes.
  • 21.
    21 Searching for Solutions A node with no children is called a leaf; the set of all leaf nodes available for expansion at any given point is called the frontier.  The process of expanding nodes on the frontier continues until either a solution is found or there are no more states to expand.  We consider the general TREE-SEARCH algorithm next.  All search algorithms share this basic structure; they vary primarily according to how they choose which state to expand next – the so- called search strategy.  In general, a TREE-SEARCH considers all possible paths (including infinite ones), whereas a GRAPH-SEARCH avoids consideration of redundant paths.
  • 22.
    22 Tree search algorithms Basic idea:  Offline, simulated exploration of state space by generating successors of already-explored states (a.k.a.~expanding states)
  • 23.
  • 24.
  • 25.
  • 26.
  • 27.
    27 Searching for Solutions Naturally, if we allow for redundant paths, then a formerly tractable problem can become intractable” “Algorithms that forget their history are doomed to repeat it.”  To avoid exploring redundant paths we can augment the TREE- SEARCH algorithm with a data structure called the explored set, which remembers every expanded node (we discard nodes in explored set instead of adding them to the frontier).
  • 28.
    28 Infrastructure for searchalgorithms  Search algorithms require a data structure to keep track of the search tree that is being constructed.  Foe each node n of the tree, we have a structure with (4) components:  (1) n.STATE: the state in the state space to which the node corresponds.  (2) n.PARENT: the node in the search tree that generated this node.  (3) n.ACTION: the action that was applied to the parent to generate the node.  (4) n.PATH-COST: the cost, traditionally denoted g(n), of the path from the initial state to the node, as indicated by the parent pointers.
  • 29.
    29 Implementation: states vs.nodes  A state is a (representation of) a physical configuration.  A node is a data structure constituting part of a search tree includes state, parent node, action, path cost g(x), depth.  The Expand function creates new nodes, filling in the various fields and using the Successor function of the problem to create the corresponding states.
  • 30.
    30 Infrastructure for searchalgorithms  Now that we have nodes (qua data structures), we need to put them somewhere.  We use a queue, with operations: EMPTY?(queue): returns true only if no elements POP(queue): removes the first element of the queue and returns it. INSERT(element, queue): inserts an element and returns the resulting queue.  Recall that queues are characterized by the order in which the store the inserted nodes: FIFO (first-in, first-out): pops the oldest element of the queue. LIFO (last-in, first-out, i.e. a stack) pops the newest element. PRIORITY QUEUE: pops element with highest “priority.”
  • 31.
    31 Search strategies  Asearch strategy is defined by picking the order of node expansion  Strategies are evaluated along the following dimensions:  completeness: does it always find a solution when one exists?  time complexity: number of nodes generated  space complexity: maximum number of nodes in memory  optimality: does it always find a least-cost solution?  Time and space complexity are measured in terms of  b: maximum branching factor of the search tree  d: depth of the least-cost solution  m: maximum depth of the state space (may be ∞)  The size of the state space graph (|V|+|E|)
  • 32.
    32 Search strategies  Inmore detail:  Time and space complexity are always considered with respect to some measure of the problem difficult (e.g. |V| + |E|).  In A.I., the state space graph is often represented implicitly by the initial state, actions and transition model (i.e. we don’t always store it explicitly).  Search algorithm complexity is frequently expressed in terms of: b: branching factor (maximum number of successors of any node) d: depth (of the shallowest goal node) m: maximum length of any path in the state space.  To assess the effectiveness of a search algorithm, we can consider just the search cost, which typically depends on time complexity (and/or memory usage); total cost combines search cost and path cost of the solution.
  • 33.
    33 Uninformed search strategies Uninformed search strategies use only the information available in the problem definition.  Breadth-first search  Uniform-cost search  Depth-first search  Depth-limited search  Iterative deepening search
  • 34.
    34 Breadth-first search  BFSis a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, etc.  BFS is an instance of the general graph-search algorithm in which the shallowest unexpanded node is chosen for expansion.  This is achieved by using a FIFO queue for the frontier. Accordingly, new nodes go to the back of the queue, and old nodes, which are shallower than the new nodes are expanded first.  NB: The goal test is applied to each node when it is generated.
35
Breadth-first search
 Expand shallowest unexpanded node
 Implementation:
 frontier is a FIFO queue, i.e., new successors go at end
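The FIFO-queue idea can be sketched in a few lines of Python. This is an illustrative sketch, not code from the slides; the graph, state names, and the `bfs` signature are assumptions made for the example.

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search: expand the shallowest unexpanded node.
    Returns the list of states from start to goal, or None."""
    if start == goal:                      # goal test on generation
        return [start]
    frontier = deque([[start]])            # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()          # shallowest node comes off the front
        for s in successors(path[-1]):
            if s in explored:
                continue
            if s == goal:                  # test each node when it is generated
                return path + [s]
            explored.add(s)
            frontier.append(path + [s])    # new successors go to the back
    return None

# Toy state space (hypothetical), as adjacency lists
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': [], 'E': []}
print(bfs('A', 'E', lambda s: graph.get(s, [])))  # ['A', 'C', 'E']
```

Because the queue is FIFO, all depth-1 nodes are expanded before any depth-2 node, which is exactly the "shallowest first" order.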
39
Properties of BFS
 Complete? Yes (if b is finite)
 Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
 Space? O(b^d) (keeps every node in memory)
 Optimal? Yes (if cost = 1 per step)
 Space is the bigger problem (more than time)
40
Uniform-cost search
 Expand least-cost unexpanded node
 Implementation:
 frontier = queue ordered by path cost
 Equivalent to breadth-first if step costs are all equal
 Complete? Yes, if step cost ≥ ε
 Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
 Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
 Optimal? Yes – nodes expanded in increasing order of g(n)
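A priority queue ordered by path cost g(n) can be implemented with Python's `heapq`. This is a hedged sketch: the weighted graph and function signature below are invented for illustration.

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Expand the unexpanded node with least path cost g(n).
    successors(s) yields (next_state, step_cost) pairs.
    Returns (cost, path) or None."""
    frontier = [(0, start, [start])]       # priority queue ordered by g
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                  # goal test on expansion (needed for optimality)
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                       # stale entry; a cheaper path was found
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph: the direct edge A->C costs more than A->B->C
edges = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(uniform_cost_search('A', 'C', lambda s: edges.get(s, [])))  # (2, ['A', 'B', 'C'])
```

Note that, unlike BFS, the goal test is applied when a node is *expanded*, not when it is generated; otherwise the search could return the suboptimal direct edge.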
41
Depth-first search
 DFS always expands the deepest node in the current frontier of the search tree. The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors.
 As those nodes are expanded, they are dropped from the frontier, so the search “backs up” to the next deepest node that still has unexplored successors.
 DFS is an instance of the general graph-search algorithm which uses a LIFO queue. This means that the most recently generated node is chosen for expansion.
42
Depth-first search
 Expand deepest unexpanded node
 Implementation:
 frontier = LIFO queue, i.e., put successors at front
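The LIFO-queue (stack) version differs from BFS only in which end of the frontier is popped. Again a sketch with a hypothetical graph, not code from the slides:

```python
def dfs(start, goal, successors):
    """Depth-first search: the frontier is a LIFO stack, so the most
    recently generated node is expanded first."""
    frontier = [[start]]                   # stack of paths
    while frontier:
        path = frontier.pop()              # deepest (most recent) node first
        state = path[-1]
        if state == goal:
            return path
        for s in successors(state):
            if s not in path:              # avoid loops along the current path
                frontier.append(path + [s])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(dfs('A', 'E', lambda s: graph.get(s, [])))  # ['A', 'C', 'E']
```

The `s not in path` check avoids cycles along the current path but does not make DFS complete in infinite-depth spaces, matching the properties on the next slides.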
54
Properties of depth-first search
 Complete? No: fails in infinite-depth spaces and spaces with loops
 Modify to avoid repeated states along the path  complete in finite spaces
 Time? O(b^m): terrible if m is much larger than d
 but if solutions are dense, may be much faster than breadth-first
 Space? O(bm), i.e., linear space!
 Optimal? No
55
Depth-limited search
 The failure of DFS in infinite state spaces can be alleviated by supplying DFS with a pre-determined depth limit l, i.e., nodes at depth l are treated as if they have no successors.
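A recursive sketch of depth-limited search, distinguishing failure (no solution anywhere below the limit) from cutoff (the limit was hit, so a solution may still exist deeper). The graph and names are illustrative assumptions:

```python
def depth_limited_search(state, goal, successors, limit):
    """DFS in which nodes at depth == limit get no successors.
    Returns a path, None (no solution below the limit), or 'cutoff'."""
    if state == goal:
        return [state]
    if limit == 0:
        return 'cutoff'                    # limit reached before the goal
    cutoff_occurred = False
    for s in successors(state):
        result = depth_limited_search(s, goal, successors, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return 'cutoff' if cutoff_occurred else None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
succ = lambda s: graph.get(s, [])
print(depth_limited_search('A', 'D', succ, 1))  # 'cutoff' (D is at depth 2)
print(depth_limited_search('A', 'D', succ, 2))  # ['A', 'B', 'D']
```

The cutoff/failure distinction matters for iterative deepening on the next slide: failure means the whole (finite) space has been searched, so increasing the limit further is pointless.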
56
Iterative deepening search
 Iterative deepening search is a general strategy, often used in combination with DFS tree search, that finds the best depth limit.
 It does this by gradually increasing the limit – first 0, then 1, then 2, and so on – until a goal is found; this will occur when the depth limit reaches d, the depth of the shallowest goal node.
 Note that iterative deepening search may seem wasteful because states are generated multiple times, but this, in fact, turns out not to be too costly (the reason is that, for a constant branching factor, most of the nodes are in the bottom level).
57
Iterative deepening search
 Number of nodes generated in a depth-limited search to depth d with branching factor b:
 N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
 Number of nodes generated in an iterative deepening search to depth d with branching factor b:
 N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d
 For b = 10, d = 5:
 N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
 N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
 Overhead = (123,456 − 111,111)/111,111 ≈ 11%
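The node counts above can be checked directly; this snippet just reproduces the slide's arithmetic:

```python
# Node counts for depth-limited vs. iterative deepening search, b = 10, d = 5.
# A node at depth i is regenerated once per iteration whose limit is >= i,
# i.e. (d + 1 - i) times in total.
b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                # b^0 + b^1 + ... + b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))  # (d+1)b^0 + ... + 1*b^d
overhead = (n_ids - n_dls) / n_dls
print(n_dls, n_ids, round(100 * overhead))  # 111111 123456 11
```

The bottom level dominates both sums, which is why regenerating the shallow levels costs only about 11% extra here.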
63
Properties of iterative deepening search
 Complete? Yes
 Time? (d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d)
 Space? O(bd)
 Optimal? Yes, if step cost = 1
64
Bidirectional search
 The main idea with bidirectional search is to run two simultaneous searches – one forward from the initial state and the other backward from the goal – hoping that the two searches meet in the middle.
 Key: b^(d/2) + b^(d/2) << b^d.
 Replace the goal test with a check to see whether the frontiers intersect.
 Time complexity (with BFS in both directions): O(b^(d/2)); space complexity: O(b^(d/2)); the space requirement is a serious weakness.
 Also, it is not always a simple matter to “search backward” – the goal state could be abstract.
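A sketch of bidirectional BFS that replaces the goal test with a frontier-intersection check. It assumes actions are reversible, so the backward search can reuse the same neighbors function; the line graph below is a hypothetical example:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Run BFS from both ends, alternating one depth layer at a time.
    Returns the length of a shortest path, or None."""
    if start == goal:
        return 0
    fwd, bwd = {start: 0}, {goal: 0}       # state -> depth reached from that end
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for q, seen, other in ((qf, fwd, bwd), (qb, bwd, fwd)):
            for _ in range(len(q)):        # expand one full depth layer
                s = q.popleft()
                for n in neighbors(s):
                    if n in other:         # the two frontiers intersect
                        return seen[s] + 1 + other[n]
                    if n not in seen:
                        seen[n] = seen[s] + 1
                        q.append(n)
    return None

# Hypothetical line graph A-B-C-D-E (edges usable in both directions)
line = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'],
        'D': ['C', 'E'], 'E': ['D']}
print(bidirectional_search('A', 'E', lambda s: line.get(s, [])))  # 4
```

Each side only needs to reach depth about d/2 before the frontiers meet, which is where the b^(d/2) + b^(d/2) << b^d saving comes from.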
66
Repeated states
 Failure to detect repeated states can turn a linear problem into an exponential one!
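One way to see this, using a hypothetical "diamond chain" state space (not from the slides): every state has two actions that lead to the same successor, so tree search generates exponentially many paths while graph search with an explored set visits each state once.

```python
def successors(i, n):
    # Two distinct actions from state i both lead to state i+1
    return [i + 1, i + 1] if i < n else []

def count_tree(i, n):
    """Nodes in the full search tree rooted at state i (no repeat detection)."""
    return 1 + sum(count_tree(s, n) for s in successors(i, n))

def count_graph(n):
    """Nodes generated by graph search with an explored set."""
    explored, frontier, generated = set(), [0], 0
    while frontier:
        s = frontier.pop()
        if s in explored:
            continue                       # repeated state detected; skip it
        explored.add(s)
        for t in successors(s, n):
            generated += 1
            if t not in explored:
                frontier.append(t)
    return generated

print(count_tree(0, 10))   # 2^11 - 1 = 2047 tree nodes for a 10-step chain
print(count_graph(10))     # only 20 nodes generated with repeat detection
```

The state space here has just 11 states, yet the naive search tree already holds over 2,000 nodes, and the gap doubles with every extra step.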
68
Summary
 Before an agent can start searching for solutions, a goal must be identified and a well-defined problem formulated.
 Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
 A problem consists of (5) parts: initial state, actions, transition model, goal test function and path cost function.
 The environment of the problem is represented by the state space. A path through the state space from the initial state to a goal state is a solution.
 TREE-SEARCH considers all possible paths; GRAPH-SEARCH avoids consideration of redundant paths.
69
Summary
 Search algorithms are judged on the basis of completeness, optimality, time complexity and space complexity. Complexity depends on b, the branching factor in the state space, and d, the depth of the shallowest solution.
 Uninformed search methods have access only to the problem definition, including:
 BFS – expands the shallowest nodes first
 Uniform-cost search – expands the node with the lowest path cost, g(n)
 DFS – expands the deepest unexpanded node first (depth-limited search adds a depth bound)
 Iterative deepening search – calls DFS with increasing depth limits until a goal is found
 Bidirectional search – can reduce time complexity but is not always applicable