chapter 3 Problem Solving using searching.pptx
Artificial Intelligence
Prepared by:
Ataklti Nguse
Chapter Three:
Problem Solving (Goal Based) Agents
3.1. Problem-solving agents
• Problem-solving agents in AI mostly use search strategies or algorithms to
solve a specific problem and provide the best result.
• The problem-solving agent performs precisely by defining problems and their
possible solutions.
• A problem-solving agent is an agent that tries to come up with a sequence of
actions that will bring the environment into a desired state.
• Problem-solving agents are goal-based agents.
• The process of looking for such a sequence of actions is called search.
• Search techniques are universal problem-solving methods.
Continued…
• Problems are the issues that come across any system.
• A solution is needed to solve a particular problem.
• The process of solving a problem consists of five steps. These are:
– Defining the Problem
– Analyzing the Problem
– Identification of Solutions
– Choosing a Solution
– Implementation
3.1.1. Four general steps in problem solving:
1. Goal Formulation: It organizes the steps/sequence required to formulate
one goal out of multiple goals as well as actions to achieve that goal.
2. Problem Formulation: is the process of deciding what actions and states to
consider, to achieve the formulated goal.
•The following five components are involved in problem formulation:
–Initial State: It is the starting state or initial step of the agent towards its goal.
–Actions: It is the description of the possible actions available to the agent.
–Transition Model: a description of what each action does, i.e., the state that results from performing an action in a given state.
–Goal Test: It determines if the given state is a goal state.
–Path cost: It assigns a numeric cost to each path; the cost function should reflect
the agent's performance measure. (A minimal sketch of these components follows below.)
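A minimal sketch of the five problem-formulation components as Python code. The class and method names are illustrative assumptions (they are not given in the slides); they simply mirror the components listed above, using a road-map-style problem as the running example.

```python
# Illustrative sketch: the five problem-formulation components as a class.
class Problem:
    def __init__(self, initial_state, goal_state, graph):
        self.initial_state = initial_state   # initial state of the agent
        self.goal_state = goal_state         # desired (goal) state
        self.graph = graph                   # {state: {next_state: step_cost}}

    def actions(self, state):
        """Actions available to the agent in `state` (here: which city to drive to)."""
        return list(self.graph.get(state, {}))

    def result(self, state, action):
        """Transition model: the state that results from doing `action` in `state`."""
        return action

    def goal_test(self, state):
        """Goal test: is `state` a goal state?"""
        return state == self.goal_state

    def path_cost(self, cost_so_far, state, action, next_state):
        """Path cost: numeric cost of extending a path by one step."""
        return cost_so_far + self.graph[state][next_state]
```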
Continued…
3. Search: Searching is a step-by-step procedure for solving a search problem in a
given search space.
• Search tree: A tree representation of a search problem is called a search tree.
• The root node of the search tree corresponds to the initial state.
• A state is a representation of a physical configuration
• A node is a data structure constituting part of a search tree
• It contains info such as: state, parent node, action, path cost g(x), depth
• A search algorithm takes a problem as input and returns a solution in the
form of an action sequence.
4. Execute the Solution: execute the action sequence that leads from the start
state/node to the goal state/node.
–Select the solution with the lowest cost among all solutions (the optimal solution).
Examples of problem solving
Road map of Ethiopia
[Figure: road map of Ethiopia showing cities (Addis Ababa, Gondar, Aksum, Mekele, Lalibela, Bahir Dar, Gambela, Dire Dawa, Adama, Awasa, Dessie, Nekemt, Jima, Debre Markos) connected by roads labeled with distances.]
Example: Road map of Ethiopia
• Current position of the agent: Awasa.
• It needs to arrive at: Gondar
• Formulate goal:
– be in Gondar
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Awasa, Adama, Addis Ababa, Dessie, Gondar
Example: vacuum world
 Single-state: the starting state is #5.
What is the solution?
Example: vacuum world
• Single-state, start in #2.
Solution?
The state space of the Romania problem
• Road map of Romania
Continued…
• Initial state: at Arad
• Actions: the successor function S:
– S(Arad) = {<Arad→Zerind, Zerind>, <Arad→Sibiu, Sibiu>, <Arad→Timisoara, Timisoara>}
– S(Sibiu) = {<Sibiu→Arad, Arad>, <Sibiu→Oradea, Oradea>, <Sibiu→Fagaras, Fagaras>, <Sibiu→Rimnicu Vilcea, Rimnicu Vilcea>}, etc.
• Goal test: at Bucharest
• Path cost:
– c(Arad, Arad→Zerind, Zerind) = 75, c(Arad, Arad→Sibiu, Sibiu) = 140, c(Arad, Arad→Timisoara, Timisoara) = 118, etc.
• A solution is a sequence of actions leading from the initial state to a goal
state
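The successor function above can be encoded directly as a dictionary lookup. A minimal sketch, assuming a Python adjacency-dict representation; the step costs 75, 140 and 118 are quoted above, while the Sibiu edge costs are the usual textbook values and should be treated as illustrative here.

```python
# Illustrative fragment of the Romania road map as a weighted adjacency dict.
romania_fragment = {
    "Arad":  {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
}

def successors(state, graph=romania_fragment):
    """Successor function S(state): a list of <action, next_state> pairs."""
    return [(f"{state}->{city}", city) for city in graph.get(state, {})]

print(successors("Arad"))
# [('Arad->Zerind', 'Zerind'), ('Arad->Sibiu', 'Sibiu'), ('Arad->Timisoara', 'Timisoara')]
```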
Example: The 8-puzzle
• states?
• actions?
• goal test?
• path cost?
Example: The 8-puzzle
• states? locations of tiles
• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move
Assignment: Construct the state space of the problem and
solve this problem using tree searching method
• The following are some more problems:
1. The Goat, Cabbage, Wolf Problem
2. The Tower of Hanoi Problem
3. The N-Queens Problem, e.g. the 8-queens problem
4. The three cannibals and three missionaries problem
Rules for the missionaries-and-cannibals problem:
 Three missionaries and three cannibals are on one side of a river that they
wish to cross.
 There is a boat that can hold one or two people.
 Find an action sequence that brings everyone safely to the opposite bank
(i.e. Cross the river).
 But you must never leave a group of missionaries outnumbered by cannibals
on the same bank (in any place).
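The "never outnumbered" rule is the key constraint when formulating this problem. Below is a minimal sketch of that safety test; the (missionaries-left, cannibals-left) state encoding is an illustrative assumption, not something fixed by the slides.

```python
# Safety test: on each bank, missionaries must not be outnumbered by cannibals
# (unless no missionaries are on that bank). The state encoding is an assumption.
def is_safe(m_left, c_left, total=3):
    m_right, c_right = total - m_left, total - c_left
    left_ok = m_left == 0 or m_left >= c_left
    right_ok = m_right == 0 or m_right >= c_right
    return left_ok and right_ok

print(is_safe(2, 2))  # True: 2M/2C on the left bank, 1M/1C on the right bank
print(is_safe(1, 2))  # False: the single missionary on the left is outnumbered
```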
3.1.2. Searching for Solution (Tree search algorithms)
• Given a state space and a network of states connected via actions.
• The network structure is usually a graph.
• A tree is a network in which there is exactly one path from the root to any node.
• Given a state S and the valid actions available at S:
– the set of next states generated by executing each action is called the successors of S.
• Searching for a solution is a simulated exploration of the state space by generating
successors (a generic sketch follows below).
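A minimal sketch of this generate-and-test loop, assuming the Problem sketch shown earlier. The `pop` parameter (which node the fringe gives up next) is what distinguishes the individual strategies discussed in the rest of the chapter; pure tree search does not check for repeated states.

```python
# Generic tree search: repeatedly take a node from the fringe, test it, and
# add its successors. The fringe discipline determines the strategy.
def tree_search(problem, pop):
    fringe = [(problem.initial_state, [problem.initial_state])]   # (state, path)
    while fringe:
        state, path = pop(fringe)                  # strategy picks the next node
        if problem.goal_test(state):
            return path
        for action in problem.actions(state):      # generate successors of state
            next_state = problem.result(state, action)
            fringe.append((next_state, path + [next_state]))
    return None

breadth_first = lambda problem: tree_search(problem, lambda f: f.pop(0))  # FIFO queue
depth_first   = lambda problem: tree_search(problem, lambda f: f.pop())   # LIFO stack
```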
Vacuum world state space graph
• states?
• actions?
• goal test?
• path cost?
Continued…
• states? dirt and robot location (one of the 8 states)
• actions? Left, Right, Suck
• goal test? no dirt at all locations
• path cost? 1 per action
Tree search example
[Figure: partial search tree for the Ethiopia road-map example, expanding from Awasa through cities such as Adama, Addis Ababa, Dire Dawa, Jima, Dessie, Gondar, Bahir Dar, Debre Markos, Gambela and Lalibela.]
8-Puzzle
[Figure: an 8-puzzle example configuration.]
Examples of Search Problems/ Tree
search representation
• 8-PUZZLE problem
• Route finding: search through the set of paths
–Looking for one which will minimize distance
• Chess: search through set of possible moves
–Looking for one which will best improve position
• Missionaries and Cannibals: search through the set of possible river
crossings
–Looking for one which transports missionaries & cannibals
3.2. Search strategies (tree-search strategies)
 A search strategy is defined by picking the order of node expansion
 Strategies are evaluated along the following dimensions:
 completeness: does it always find a solution if one exists?
 time complexity: how many operations are needed?
 space complexity: maximum number of nodes in memory
 optimality: does it always find a least-cost solution?
 Time and space complexity are measured in terms of
 b: maximum branching factor of the search tree
 d: depth of the least-cost solution
 m: maximum depth of the state space (may be ∞)
 Generally, searching strategies can be classified into two categories: uninformed and informed
search strategies
Continued…
[Figure: a search tree annotated with the branching factor b, the depth d of the least-cost solution, the maximum depth m, and a goal node G.]
3.2.1. Uninformed/Blind Search
• Uninformed search does not use any domain knowledge, such as the closeness or
location of the goal.
• Use no knowledge about which path is likely to be best
• It can be divided into five or six main types:
I. Breadth-first search
II. Uniform cost search
III. Depth-first search (and depth-limited search)
IV. Iterative deepening depth-first search
V. Bidirectional Search
Breadth first search
•Expand shallowest unexpanded node,
–i.e. expand all nodes on a given level of the search
tree before moving to the next level
•Implementation: the fringe/open list is a first-in-first-out
(FIFO) queue, i.e., new successors go at the end of the queue.
–Is A a goal state? Expand: fringe = [B, C]
–Is B a goal state? Expand: fringe = [C, D, E]
–Is C a goal state? Expand: fringe = [D, E, F, G]
–Is D a goal state? …
–Pop nodes from the front of the queue, then return the final path.
–The solution path {A, C, G} is recovered by following the back
pointers starting at the goal state (see the sketch below).
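A minimal breadth-first search sketch over an adjacency dict. The node labels A to G mirror the trace above; the dict itself is an illustrative assumption.

```python
from collections import deque

# Breadth-first search: the frontier is a FIFO queue of paths, so the
# shallowest unexpanded node is always expanded first.
def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    explored = set()
    while frontier:
        path = frontier.popleft()        # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for successor in graph.get(node, []):
            frontier.append(path + [successor])
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(breadth_first_search(graph, "A", "G"))   # ['A', 'C', 'G']
```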
Properties of breadth-first search
• Complete? Yes (if b is finite)
• Time complexity? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
• Space complexity? O(b^d) (keeps every node in memory)
• Optimal? Yes, if all step costs are 1 or constant (not optimal in general)
Depth-first search
•Expand one of the nodes at the deepest level of the
tree.
•It is called the depth-first search because it starts
from the root node and follows each path to its
greatest depth node before moving to the next path.
•Implementation: DFS uses a stack data structure
for its implementation.
•Pop nodes from the top of the stack
•Properties
–Incomplete and not optimal: fails in infinite-depth
spaces and in spaces with loops.
• Modify to avoid repeated states along the path.
–Takes less space (linear): only needs to remember
the current path up to the depth expanded.
–Space? O(bm), i.e., linear space!
–Time? O(b^m)
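A minimal iterative depth-first search sketch using an explicit stack, reusing the illustrative adjacency-dict representation from the BFS sketch above.

```python
# Depth-first search: the frontier is a LIFO stack of paths, so the deepest
# unexpanded node is always expanded first.
def depth_first_search(graph, start, goal):
    frontier = [[start]]                 # stack (LIFO) of paths
    explored = set()
    while frontier:
        path = frontier.pop()            # deepest path first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        # push successors; reversed() keeps a left-to-right expansion order
        for successor in reversed(graph.get(node, [])):
            frontier.append(path + [successor])
    return None
```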
Depth-limited search
• Search depth-first, but terminate a path either if a goal state is found,
or if the maximum allowed depth is reached.
• Unlike DFS, this algorithm always terminates.
• It avoids the problem of the search never terminating by imposing a hard
limit on the depth of any search path.
• However, it is still not complete (the goal depth may be greater than
the limit allowed).
• It returns a solution if one exists within the limit; if there is no solution,
it returns cutoff if l < m, and failure otherwise. (A recursive sketch follows below.)
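A recursive depth-limited search sketch. Returning the string "cutoff" versus None mirrors the cutoff/failure distinction above; the function and parameter names are illustrative assumptions.

```python
# Depth-limited search: plain DFS that refuses to descend past `limit`.
def depth_limited_search(graph, node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return "cutoff"                  # depth bound reached on this branch
    cutoff_occurred = False
    for successor in graph.get(node, []):
        result = depth_limited_search(graph, successor, goal, limit - 1, path)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result                # a solution path was found
    return "cutoff" if cutoff_occurred else None   # None signals failure
```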
Depth-limited search
• Complete? No (fails if all solutions exist at depth > l)
• Time? O(b^l)
• Space? O(bl)
• Optimal? No
Iterative Deepening Search (IDS)
•To avoid the infinite-depth problem of DFS, we can decide to search only until
depth L, i.e. we do not expand beyond depth L.
–Depth-Limited Search
•What if the solution is deeper than L? Increase L iteratively.
–Iterative Deepening Search
• This search combines the benefits of DFS and BFS:
–DFS is efficient in space, but has no path-length guarantee
–BFS finds the minimum-step path towards the goal, but requires memory space
–IDS performs a sequence of DFS searches with increasing depth cutoff until the
goal is found
[Figure: iterative deepening search with limit l = 0, 1, 2.]
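A minimal iterative deepening sketch built on the depth-limited sketch above; the `max_limit` safety bound is an illustrative assumption.

```python
# Iterative deepening: run depth-limited search with an increasing limit
# until a result other than "cutoff" is returned.
def iterative_deepening_search(graph, start, goal, max_limit=50):
    for limit in range(max_limit):
        result = depth_limited_search(graph, start, goal, limit)
        if result != "cutoff":
            return result      # a solution path, or None if the space is exhausted
    return None
```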
Properties of iterative deepening search
• Complete? Yes
• Time? (d+1)b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d)
• Space? O(bd)
• Optimal? Yes, if step cost = 1
Uniform cost Search
[Figure: example graph with start S, intermediate nodes A, B and C, and goal G; edges are labeled with step costs, and the uniform-cost expansion shows the cumulative path cost g at each generated node.]
•Strategy to select the state to expand next: use the state with the smallest value
of g() so far.
•Use a priority queue for efficient access to the minimum g at every iteration.
•The goal of this technique is to find the shortest path to the goal in terms of
cost.
–It modifies BFS by always expanding the least-cost unexpanded node.
•Implementation: nodes in the list keep track of the total path length from the start
to that node.
–The list is kept in a priority queue ordered by path cost (see the sketch below).
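A minimal uniform-cost search sketch using Python's heapq as the priority queue ordered by the path cost g; the weighted adjacency-dict representation is an illustrative assumption.

```python
import heapq

# Uniform-cost search: always expand the frontier node with the smallest g.
def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]              # priority queue ordered by g
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)   # least-cost node first
        if node == goal:
            return path, g
        for successor, step_cost in graph.get(node, {}).items():
            new_g = g + step_cost
            if new_g < best_g.get(successor, float("inf")):
                best_g[successor] = new_g
                heapq.heappush(frontier, (new_g, successor, path + [successor]))
    return None, float("inf")
```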
Comparing Uninformed Search
• b is the branching factor,
• d is the depth of the shallowest solution,
• m is the maximum depth of the search tree,
• l is the depth limit

Strategy                       Complete   Optimal   Time complexity   Space complexity
Breadth-first search           yes        yes       O(b^d)            O(b^d)
Depth-first search             no         no        O(b^m)            O(bm)
Depth-limited search           no         no        O(b^l)            O(bl)
Uniform-cost search            yes        yes       O(b^d)            O(b^d)
Iterative deepening search     yes        yes       O(b^d)            O(bd)
3.2.2. Informed search algorithms
• Informed search algorithms attempt to use extra domain
knowledge to inform the search, in an attempt to reduce search
time.
• A particular class of informed search algorithms is known as
best-first search.
• In best-first search, we use a heuristic function to estimate which
of the nodes in the fringe is the “best” node for expansion.
• This heuristic function, h(n), estimates the cost of the cheapest
path from node n to the goal state. In other words, it tells us
which of the nodes in the fringe it thinks is "closest" to the goal.
• Examples of best-first search algorithms are greedy best-first search
and A* search.
Greedy Best-First Search
• The simplest best-first search algorithm is greedy best-first search
• Simply expands the node that is estimated to be closest to the goal
• Which means the lowest value of the heuristic function h(n).
Continued …
 Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to
goal
 That means the agent prefers to choose the action which is assumed to
be best after every action
 e.g., hSLD(n) = straight-line distance from n to X (destination)
 Greedy best-first search expands the node that appears to be closest to
the goal (it tries to minimize the estimated cost to reach the goal). A code
sketch is given below.
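A minimal greedy best-first search sketch, ordering the frontier purely by h(n). The graph and heuristic dicts are illustrative assumptions that mirror the S-to-G example which follows.

```python
import heapq

# Greedy best-first search: expand the node with the smallest heuristic h(n).
def greedy_best_first_search(graph, h, start, goal):
    frontier = [(h[start], start, [start])]        # ordered by h(n) only
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for successor in graph.get(node, {}):
            if successor not in explored:
                heapq.heappush(frontier, (h[successor], successor, path + [successor]))
    return None
```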
Greedy Search: f(n) = h(n)
[Figure: example graph with start S (h=8), successors A (h=8), B (h=4) and C (h=3), internal nodes D and E, and goal G (h=0); edges are labeled with step costs.]

Trace (expanded node → OPEN list):
  start                    OPEN = (S:8)
  S, not goal              OPEN = (C:3, B:4, A:8)    [1 node tested, 1 expanded]
  C, not goal              OPEN = (G:0, B:4, A:8)    [2 tested, 2 expanded]
  G, goal, no expansion    OPEN = (B:4, A:8)         [3 tested, 2 expanded]

* Fast but not optimal. Path: S, C, G. Cost: 13.
Romania with step costs in km
Greedy search example
[Figure: greedy best-first expansion on the Romania map starting from Arad, using the straight-line-distance heuristic (e.g. h(Sibiu) = 253) and following Arad → Sibiu → Fagaras → Bucharest.]
• Total cost is g = 140 + 99 + 211 = 450
• Note: if the Sibiu, Rimnicu Vilcea, Pitesti path is chosen, g = 418
Properties of greedy best-first search
• Complete? Yes, if repetition is controlled; otherwise it can get stuck in loops
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m), keeps all nodes in memory
• Optimal? No
A* search
Idea: avoid expanding paths that are already
expensive
Evaluation function f(n) = g(n) + h(n) where
g(n) = cost so far to reach n
h(n) = estimated cost from n to goal
f(n) = estimated total cost of path through n to goal
It tries to minimize the estimated total cost of the path through each
node n to the goal (a sketch follows below).
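A minimal A* sketch ordering the frontier by f(n) = g(n) + h(n). The data-structure choices (a weighted adjacency dict plus a heuristic dict) are illustrative assumptions.

```python
import heapq

# A* search: order the frontier by f = g + h, where g is the cost so far and
# h is the heuristic estimate of the remaining cost.
def a_star_search(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]     # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for successor, step_cost in graph.get(node, {}).items():
            new_g = g + step_cost
            if new_g < best_g.get(successor, float("inf")):
                best_g[successor] = new_g
                new_f = new_g + h[successor]
                heapq.heappush(frontier, (new_f, new_g, successor, path + [successor]))
    return None, float("inf")
```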
Example one
• Given the following tree structure, show the content of the open list and
closed list generated by the A* best-first search algorithm.
[Figure: search tree rooted at S, with intermediate nodes A to J and goal nodes G1, G2 and G3; edges are labeled with step costs.]

Heuristic values (estimated cost to the goal G):
  S:  90        A:  70        B:  70        C:  60
  D:  55        E:  30        F:  35        H:  10
  I:  20        J:   8        G1, G2, G3: 0
Admissible heuristics
 A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal
state from n.
 An admissible heuristic never overestimates the cost to
reach the goal, i.e., it is optimistic.
 Example: hSLD(n) (never overestimates the actual road
distance)
 Theorem: If h(n) is admissible, A* using TREE-SEARCH is
optimal
Example
[Figure: example graph with start S (h=8), nodes A (h=8), B (h=4), C (h=3), D and E, and goal G (h=0); edges are labeled with step costs.]

  n   g(n)        h(n)   f(n)        h*(n)
  S   0           8      8           11
  A   2           8      10          10
  B   6           4      10          5
  C   9           3      12          6
  D   6           ∞      ∞           ∞
  E   12          ∞      ∞           ∞
  G   12/11/15    0      12/11/15    0

Since h(n) ≤ h*(n) for all n, h is admissible.
Find Admissible heuristics for the 8-puzzle?
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its
desired location). This is also called city-block distance.
• h1(S) = ?
• h2(S) = ?
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles (the number of tiles in the wrong position)
• h2(n) = total Manhattan distance (the sum of the distances of the tiles from their
goal positions, i.e., the number of squares each tile is from its desired location)
• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
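A sketch of the two heuristics in code, assuming states are 9-tuples read row by row with 0 as the blank; the representation and goal ordering are illustrative assumptions.

```python
# Illustrative 8-puzzle heuristics; states are 9-tuples, 0 marks the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h1_misplaced(state, goal=GOAL):
    """Number of non-blank tiles that are not in their goal position."""
    return sum(1 for tile, g in zip(state, goal) if tile != 0 and tile != g)

def h2_manhattan(state, goal=GOAL):
    """Sum of Manhattan (city-block) distances of each tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total
```

Both heuristics are admissible: every misplaced tile needs at least one move, and no tile can reach its goal square in fewer moves than its Manhattan distance.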
A* search example
[Figure: step-by-step A* expansion on the Romania map from Arad to Bucharest, showing f = g + h at each frontier node.]
Exercises
• Consider the following state space.
• There are five locations, identified by the letters A to E, and the
numbers shown are the step costs to move between them.
• Also shown is the value of a heuristic function for each state:
• the straight-line distance to reach E.
• The initial state of the problem is A, and the goal is to reach E
Exercises
• Draw the search tree that would result after each iteration of
greedy best-first search. Is the solution optimal?
• Draw the search tree that would result after each iteration of A*
search. Is the solution optimal?