College of Engineering
Department of Software Engineering
Chapter Two
Solving Problems by Searching
Dr. Zeleke A.
AASTU, October 2024
Outline
1. Problem Solving Agents
2. Search
3. Uninformed search algorithms
4. Informed search algorithms
5. Constraint Satisfaction Problem
Previous class
Agents are used to provide a consistent viewpoint on various
topics in the field of AI.
 Problem-solving agents are a type of goal-based agent that operate by
finding sequences of actions to achieve specific goals.
Essential concepts:
 Agents interact with the environment by means of sensors and
actuators.
 A rational agent does “the right thing” ≡ maximizes a
performance measure
➨ PEAS
• Environment types: observable, deterministic, episodic, static,
discrete
• Agent types: table driven (rule based), simple reflex, model-
based reflex, goal-based, utility-based, learning agent
Structure of Agents
Agent = Architecture + Program
• Architecture
• operating platform of the agent
• computer system, specific hardware, possibly OS
• Program
• function that implements the mapping from percepts
to actions
Reflex agents are simple
 they base their actions on
 a direct mapping from states to actions
 but they cannot work well in environments
 in which this mapping would be too large to store
 and would take too long to learn
Hence, a goal-based agent is used
Problem-solving agent
 A kind of goal-based agent
 It solves problems by
 finding sequences of actions that lead to desirable
states (goals)
 To solve a problem,
 the first step is the goal formulation, based on
the current situation
Assume our agents always have access to information about the world, such as
the map in Figure 3.1.
With that information, the agent can follow this four-phase problem-solving
process:
1. Goal formulation: The agent adopts the goal of reaching Bucharest.
 Goals organize behavior by limiting the objectives and hence the actions to
be considered.
2. Problem formulation: The agent devises a description of the states and actions
necessary to reach the goal—an abstract model of the relevant part of the world.
• For our agent, one good model is to consider the actions of traveling from one
city to an adjacent city, and therefore the only fact about the state of the world
that will change due to an action is the current city.
3. Search: Before taking any action in the real world, the agent simulates
sequences of actions in its model, searching until it finds a sequence of actions that
reaches the goal. Such a sequence is called a solution. The agent might have to
simulate multiple sequences that do not reach the goal, but eventually it will find a
solution (such as going from Arad to Sibiu to Fagaras to Bucharest), or it will find
that no solution is possible.
4. Execution: The agent can now execute the actions in the solution, one at a time.
Goal formulation
The goal is formulated
 as a set of world states, in which the goal is satisfied
Reaching from the initial state to a goal state
 Actions are required
Actions are the operators
 causing transitions between world states
 Actions should be abstract to a suitable degree,
instead of very detailed
 E.g., “turn left” vs. “turn left 30 degrees”, etc.
Problem formulation
The process of deciding
 what actions and states to consider
E.g., driving Addis Ababa → Hawasa
 in-between states and actions defined
 States: Some places in Addis & Hawasa
 Actions: Turn left, Turn right, go straight,
accelerate & brake, etc.
Search
Because there are many ways to achieve
the same goal
 Those ways are together expressed as a tree
 Multiple options of unknown value at a point,
 the agent can examine different possible sequences
of actions, and choose the best
 This process of looking for the best sequence
is called search
 The best sequence is then a list of actions,
called solution
Search algorithm
Defined as
 taking a problem
 and returning a solution
Once a solution is found
 the agent follows the solution
 and carries out the list of actions – execution
phase
Design of an agent
 “Formulate, search, execute”
Well-defined problems and solutions
A problem is defined by 5 components:
Initial state
Actions
Transition model or Successor functions
Goal Test
Path Cost
Well-defined problems and solutions
A problem is defined by 5 components:
 The initial state
 that the agent starts in
 The set of possible actions
 Transition model: description of what each action does.
(successor functions): refer to any state reachable from
given state by a single action
 Initial state, actions and Transition model define the
state space
 the set of all states reachable from the initial state by any
sequence of actions.
 A path in the state space:
 any sequence of states connected by a sequence of actions.
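The five components above can be captured in a small program interface. A minimal Python sketch (class and method names are illustrative, not from the text):

```python
class Problem:
    """Abstract formulation of a search problem via its five components."""

    def __init__(self, initial, goals):
        self.initial = initial      # the initial state the agent starts in
        self.goals = set(goals)     # an explicit set of goal states

    def actions(self, state):
        """The set of actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """True when `state` satisfies the goal."""
        return state in self.goals

    def step_cost(self, state, action, next_state):
        """Cost of a single step; by default every step costs 1."""
        return 1
```

A concrete problem subclasses this and fills in `actions` and `result`; the state space is then everything reachable from `initial` by repeated `result` calls.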
Well-defined problems and solutions
The goal test
 Applied to the current state to test
 whether the agent has reached its goal
 Sometimes there is an explicit set of possible
goal states (example: being in Hawasa)
 Sometimes the goal is described by
properties
 instead of stating explicitly the set of states
 Example: Chess
 the agent wins if it can capture the KING of the
opponent on next move ( checkmate).
 no matter what the opponent does
Well-defined problems and solutions
A path cost function,
 assigns a numeric cost to each path
 = performance measure
 denoted by g
 to distinguish the best path from others
Usually the path cost is
 the sum of the step costs of the individual
actions (in the action list)
Well-defined problems and solutions
Together a problem is defined by
 Initial state
 Actions
 Successor function
 Goal test
 Path cost function
The solution of a problem is then
 a path from the initial state to a state satisfying the goal
test
Optimal solution
 the solution with lowest path cost among all solutions
Formulating problems
Besides the five components for problem
formulation
 anything else?
Abstraction
 the process to take out the irrelevant information
 leave the most essential parts to the description of the
states
( Remove detail from representation)
 Conclusion: Only the most important parts that are
contributing to searching are used
Evaluation Criteria
Formulation of a problem as search task
Basic search strategies
Important properties of search strategies
Selection of search strategies for specific
tasks
(The ordering of the nodes in FRINGE defines
the search strategy)
Problem-Solving Agents
agents whose task is to solve a particular
problem (steps)
 goal formulation
 what is the goal state
 what are important characteristics of the goal state
 how does the agent know that it has reached the
goal
 are there several possible goal states
 are they equal or are some more preferable
 problem formulation
 what are the possible states of the world relevant for
solving the problem
 what information is accessible to the agent
 how can the agent progress from state to state
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest
Formulate goal:
 be in Bucharest
Formulate problem:
 states: various cities
 actions: drive between cities
Find solution:
 sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Single-state problem formulation
A problem is defined by four items:
1.Initial state e.g., "at Arad"
2.Actions or successor function S(x) = set of action–state
pairs
 e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }
3.Goal test, can be
 explicit, e.g., x = "at Bucharest"
 implicit, e.g., Checkmate(x)
4.Path cost (additive)
 e.g., sum of distances, number of actions executed, etc.
 c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial
state to a goal state
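The Romania formulation can be made concrete as an adjacency dictionary with the standard step costs in km (a fragment of the map, stored here for illustration):

```python
# A fragment of the Romania road map (distances in km, standard example);
# each entry maps a city to its neighbours and step costs.
romania = {
    "Arad":           {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind":         {"Arad": 75, "Oradea": 71},
    "Oradea":         {"Zerind": 71, "Sibiu": 151},
    "Sibiu":          {"Arad": 140, "Oradea": 151, "Fagaras": 99,
                       "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
    "Timisoara":      {"Arad": 118},
}

def path_cost(path):
    """Additive path cost: the sum of the step costs along the path."""
    return sum(romania[a][b] for a, b in zip(path, path[1:]))
```

The solution Arad–Sibiu–Fagaras–Bucharest has path cost 140 + 99 + 211 = 450; the route via Rimnicu Vilcea and Pitesti costs 418 and is the optimal solution.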
Example problems
Toy problems
 those intended to illustrate or exercise
various problem-solving methods
 E.g., puzzle, chess, etc.
Real-world problems
 tend to be more difficult and whose
solutions people actually care about
 E.g., Design, planning, etc.
Toy problems
Example: vacuum world
Number of states: 8
Initial state: Any
Number of actions: 4
 left, right, suck,
noOp
Goal: clean up all dirt
 Goal states: {7, 8}
 Path Cost:
 Each step costs 1
The 8-puzzle
States:
 a state description specifies the location of each of the
eight tiles and blank in one of the nine squares
Initial State:
 Any state in state space
Successor function:
 the blank moves Left, Right, Up, or Down
Goal test:
 current state matches the goal configuration
Path cost:
 each step costs 1, so the path cost is just the length of
the path
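The 8-puzzle successor function above can be sketched in Python (encoding the state as a 9-tuple read row by row, with 0 for the blank, is an illustrative choice):

```python
def successors(state):
    """Successor function for the 8-puzzle.

    `state` is a tuple of 9 entries (0 = the blank) read row by row;
    the blank may move Left, Right, Up, or Down within the 3x3 board.
    Returns a list of (action, next_state) pairs.
    """
    i = state.index(0)                      # position of the blank
    row, col = divmod(i, 3)
    moves = []
    for action, dr, dc in [("Up", -1, 0), ("Down", 1, 0),
                           ("Left", 0, -1), ("Right", 0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:       # stay on the board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]         # slide the tile into the blank
            moves.append((action, tuple(s)))
    return moves
```

With the blank in a corner there are 2 successors, and with the blank in the centre there are 4, so the branching factor is at most 4.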
The 8-queens
There are two ways to formulate the problem
Both have the following in common:
 Goal test: 8 queens on the board, not attacking
each other
 Path cost: zero
The 8-queens
(1) Incremental formulation
 involves operators that augment the
state description starting from an empty
state
 Each action adds a queen to the state
 States:
 any arrangement of 0 to 8 queens on board
 Successor function:
 add a queen to any empty square
The 8-queens
(2) Complete-state formulation
 starts with all 8 queens on the board
 move the queens individually around
 States:
 any arrangement of 8 queens, one per
column in the leftmost columns
 Operators: move an attacked queen to a
row, not attacked by any other
The 8-queens
Conclusion:
 the right formulation makes a big difference
to the size of the search space
Example: River Crossing
Items: Man, Wolf, Corn, Chicken.
Man wants to cross river with all items.
 Wolf will eat Chicken
 Chicken will eat corn.
 Boat will take max of two.
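The river-crossing constraints translate directly into a predicate on states; a minimal sketch (the state encoding as a dict mapping each item to a bank is an assumption for illustration):

```python
def safe(state):
    """`state` maps each of man/wolf/chicken/corn to a bank, 'L' or 'R'.

    A state is unsafe if a predator is left on the same bank as its prey
    without the man present: the wolf eats the chicken, and the chicken
    eats the corn.
    """
    for predator, prey in [("wolf", "chicken"), ("chicken", "corn")]:
        if state[predator] == state[prey] != state["man"]:
            return False
    return True
```

A search over crossings then only generates successor states for which `safe` holds; the wolf alone with the corn is fine.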
3.3 Searching for solutions
Finding a solution is done by
 searching through the state space
All problems can be represented
 as a search tree
 generated by the initial state and
successor function
Search tree
Initial state
 The root of the search tree is a search node
Expanding
 applying successor function to the current state
 thereby generating a new set of states
leaf nodes
 the states having no successors
Fringe : Set of search nodes that have not been
expanded yet.
Refer to next figure
Tree search example
(figure: successive expansions of the search tree for the route from
Arad)
Search tree
The essence of searching
 in case the first choice is not correct
 choosing one option and keep others for
later inspection
Hence we have the search strategy
 which determines the choice of which
state to expand
 good choice → less work → faster search
Important:
 state space ≠ search tree
Search tree
State space
 has unique states {A, B}
 while a search tree may have cyclic
paths: A-B-A-B-A-B- …
A good search strategy should
avoid such paths
Search tree
A node is having five components:
 STATE: which state it is in the state space
 PARENT-NODE: from which node it is
generated
 ACTION: which action applied to its
parent-node to generate it
 PATH-COST: the cost, g(n), from initial
state to the node n itself
 DEPTH: number of steps along the path
from the initial state
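The five node components map naturally onto a small class; a sketch (names are illustrative):

```python
class Node:
    """Search-tree node with the five components: STATE, PARENT-NODE,
    ACTION, PATH-COST g(n), and DEPTH."""

    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action           # action applied to the parent
        self.path_cost = (parent.path_cost if parent else 0) + step_cost
        self.depth = (parent.depth + 1) if parent else 0

    def path(self):
        """States from the initial state to this node, via PARENT links."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))
```

Once a goal node is found, following the PARENT links backwards (as `path` does) recovers the solution sequence.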
Measuring problem-solving performance
The evaluation of a search strategy
 Completeness:
 is the strategy guaranteed to find a solution when there
is one?
 Optimality:
 does the strategy find the highest-quality solution when
there are several different solutions?
 Time complexity:
 how long does it take to find a solution?
 Space complexity:
 how much memory is needed to perform the search?
Measuring problem-solving performance
In AI, complexity is expressed in
 b, branching factor, maximum number of
successors of any node
 d, the depth of the shallowest goal node.
(depth of the least-cost solution)
 m, the maximum length of any path in the state
space
Time and Space is measured in
 number of nodes generated during the search
 maximum number of nodes stored in memory
Measuring problem-solving performance
For effectiveness of a search algorithm
 we can just consider the total cost
 The total cost = path cost (g) of the solution
found + search cost
 search cost = time necessary to find the solution
Tradeoff:
 (long time, optimal solution with least g)
 vs. (shorter time, solution with slightly larger
path cost g)
Uninformed search strategies
Previous class
 Problem-solving agents
• goal-based agent that operate by finding sequences of actions to
achieve specific goals.
 Its core functionality is the ability to formulate a problem and then
search for a solution.
 follows a "formulate, search, execute" cycle:
 Formulate Goal: The agent identifies what it wants to achieve.
 Formulate Problem: The agent defines the problem in terms of
its current state and the desired goal.
 Search: The agent employs a search algorithm to explore
possible actions that can lead to the goal.
 Execution Phase: Once a sequence of actions is determined, the
agent executes these actions one at a time
until the goal is achieved.
A problem can be formally defined by five components:
1.Initial State: The starting condition from which the agent begins its
task.
2.Actions: The set of possible actions available from any given state.
3.Transition Model: A description of how each action affects the current
state, often represented by a function that maps states and actions to
resulting states.
4.Goal Test: The condition that defines success for the agent.
5.Path Cost: A measure that evaluates the total cost associated with
reaching the goal from the initial state.
Uninformed search strategies
Uninformed search
 no information about the number of steps
or the path cost from the current state to the goal
 search the state space blindly.
Informed search, or heuristic search
 a strategy that searches toward the goal,
based on the information from the current state so
far.
Uninformed search strategies
uninformed search and informed search are two fundamental
categories of algorithms used to explore the search space and find
solutions to problems.
 The primary distinction between these two types lies in how they utilize information
during the search process.
Uninformed Search
 Uninformed search algorithms, also known as blind search, operate without any
additional knowledge about the goal state or the structure of the state space.
 They rely solely on the problem definition provided at the outset.
some key characteristics:
 No Heuristic Information: Uninformed search does not use heuristics or
any domain-specific knowledge to guide its exploration. It treats all paths
equally and explores them systematically.
 Examples: Common examples include:
 Breadth-First Search (BFS): Explores all nodes at the present depth prior to moving on to
nodes at the next depth level.
 Depth-First Search (DFS): Explores as far down one branch as possible before
backtracking.
Uninformed search strategies
Breadth-first search
Uniform cost search
Depth-first search
Depth-limited search
Iterative deepening search
Bidirectional search
Breadth-first search
The root node is expanded first (FIFO)
All the nodes generated by the root
node are then expanded
And then their successors and so on
Breadth-First Strategy
New nodes are inserted at the end of FRINGE.
For a binary tree with root 1, children 2 and 3, and
grandchildren 4, 5 (under 2) and 6, 7 (under 3), the
fringe evolves as follows:
FRINGE = (1)
FRINGE = (2, 3)
FRINGE = (3, 4, 5)
FRINGE = (4, 5, 6, 7)
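The FIFO fringe can be implemented with a double-ended queue; a minimal sketch that stores whole paths on the fringe so the solution can be returned directly:

```python
from collections import deque

def breadth_first_search(start, goal, neighbours):
    """BFS over a graph given by `neighbours(state) -> iterable of states`.
    Returns a list of states on a shallowest path to `goal`, or None."""
    fringe = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while fringe:
        path = fringe.popleft()        # expand the oldest node first
        if path[-1] == goal:
            return path
        for s in neighbours(path[-1]):
            if s not in visited:       # avoid repeated states
                visited.add(s)
                fringe.append(path + [s])
    return None

# The binary tree from the slides: root 1, children 2 and 3, and so on.
tree = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}
```

On that tree, `breadth_first_search(1, 7, tree.__getitem__)` expands nodes in the order 1, 2, 3, … and returns the shallowest path [1, 3, 7].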
Breadth-first search (Analysis)
Breadth-first search
 Complete – finds a solution eventually
 Optimal, if step cost is 1
The disadvantage
 if the branching factor of a node is
large,
 for even small instances (e.g., chess)
 the space complexity and the time
complexity are enormous
Properties of breadth-first search
Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
Space? O(b^d) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time)
Breadth-first search (Analysis)
assuming 10000 nodes can be processed per second, each with
1000 bytes of storage
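These assumptions can be plugged into the O(b^d) node count to estimate time and memory; a small illustrative calculation (the branching factor b = 10 is an assumed value for the example):

```python
def bfs_cost(b, d, nodes_per_sec=10_000, bytes_per_node=1_000):
    """Estimate BFS resource use: total nodes 1 + b + ... + b^d,
    the time to process them, and the memory to store them."""
    nodes = sum(b**i for i in range(d + 1))
    return nodes, nodes / nodes_per_sec, nodes * bytes_per_node

# For b = 10, d = 6: about 1.1 million nodes, roughly 111 seconds,
# and on the order of a gigabyte of memory.
nodes, secs, mem = bfs_cost(10, 6)
```

The point of the exercise is that memory grows as fast as time, which is why space is the bigger problem for BFS.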
Uniform cost search
Breadth-first finds the shallowest goal state
 but that is not necessarily the least-cost solution
 it works as intended only if all step costs are equal
Uniform cost search
 modifies breadth-first strategy
 by always expanding the lowest-cost node
 The lowest-cost node is measured by the path
cost g(n)
Uniform cost search
the first solution found is guaranteed to be the cheapest
 lowest in path cost, not necessarily shallowest in depth
 but this requires non-decreasing path costs along a path
 unsuitable for operators with negative cost
Uniform-cost search
Expand least-cost unexpanded node
Implementation:
 fringe = queue ordered by path cost
Equivalent to breadth-first if step costs are all equal
Complete? Yes, if step cost ≥ ε
Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where
C* is the cost of the optimal solution
Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
Optimal? Yes – nodes expanded in increasing order of g(n)
where
C* is the cost of the optimal solution, and
ε is a positive constant (a lower bound on every action cost)
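A sketch of uniform-cost search using a priority queue ordered by g(n), assuming non-negative step costs (function and variable names are illustrative):

```python
import heapq

def uniform_cost_search(start, goal, neighbours):
    """UCS: always expand the fringe node with the lowest path cost g(n).
    `neighbours(state)` yields (next_state, step_cost) pairs.
    Returns (cost, path) for the cheapest path, or None."""
    fringe = [(0, start, [start])]     # priority queue ordered by g
    best = {start: 0}                  # cheapest known g per state
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path             # first goal popped is the cheapest
        for nxt, cost in neighbours(state):
            if g + cost < best.get(nxt, float("inf")):
                best[nxt] = g + cost
                heapq.heappush(fringe, (g + cost, nxt, path + [nxt]))
    return None
```

On the graph `{"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}` the cheap two-step route A–B–C (cost 2) is found before the direct edge of cost 5, because nodes are expanded in increasing order of g(n).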
Depth-first search
Always expands one of the nodes at the
deepest level of the tree
Only when the search hits a dead end
 goes back and expands nodes at shallower levels
 Dead end  leaf nodes but not the goal
Backtracking search
 only one successor is generated on expansion
 rather than all successors
 fewer memory
Depth-first search
Expand deepest unexpanded node
Implementation:
 fringe = LIFO queue, i.e., put successors at front
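A minimal recursive sketch of depth-first search; repeated states along the current path are skipped so that cyclic paths do not loop forever:

```python
def depth_first_search(start, goal, neighbours, path=None):
    """DFS: always expand the deepest node first.
    Returns the first path found to `goal`, or None."""
    path = path or [start]
    if path[-1] == goal:
        return path
    for s in neighbours(path[-1]):
        if s not in path:              # avoid cycles on the current path
            found = depth_first_search(start, goal, neighbours, path + [s])
            if found is not None:
                return found
    return None
```

Note that the path returned is the first one the depth-first order reaches, not necessarily the shallowest or cheapest one.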
Depth-first search
(figure: depth-first expansion order on a search tree rooted at S, with
goal nodes G and path costs at the leaves)
Depth-first search (Analysis)
Not complete
 because a path may be infinite or looping
 then the search never terminates and never goes
back to try another option
Not optimal
 it doesn't guarantee the best solution
It overcomes
 the time and space complexities
Properties of depth-first search
Complete? No: fails in infinite-depth spaces, spaces
with loops
 Modify to avoid repeated states along path
 complete in finite spaces
Time? O(b^m): terrible if m is much larger than d
 but if solutions are dense, may be much faster than breadth-
first
Space? O(bm) (branching factor × maximum depth), i.e., linear space!
Optimal? No
Depth-Limited Strategy
Depth-first with depth cutoff k (maximal
depth below which nodes are not
expanded)
Three possible outcomes:
 Solution
 Failure (no solution)
 Cutoff (no solution within cutoff)
Depth-limited search
It is depth-first search
 with a predefined maximum depth
 However, it is usually not easy to define
the suitable maximum depth
 too small  no solution can be found
 too large  the same problems are
suffered from
The search is
 complete only if the depth limit l is at least the
depth d of the shallowest solution
 but still not optimal
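A sketch of depth-limited search that distinguishes the three outcomes above (solution, failure, cutoff):

```python
def depth_limited_search(state, goal, neighbours, limit):
    """DFS with depth cutoff `limit`.
    Returns a path (solution), "cutoff" (no solution within the limit),
    or None (failure: no solution at all below this state)."""
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"                # ran out of depth, not of options
    cut = False
    for s in neighbours(state):
        result = depth_limited_search(s, goal, neighbours, limit - 1)
        if result == "cutoff":
            cut = True                 # something deeper might still exist
        elif result is not None:
            return [state] + result
    return "cutoff" if cut else None
```

Distinguishing "cutoff" from plain failure matters: only a cutoff result tells the caller that trying a larger limit could still help.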
Depth-limited search
(figure: the same search tree explored depth-first with depth limit 3)
Iterative deepening search
No choosing of the best depth limit
It tries all possible depth limits:
 first 0, then 1, 2, and so on
 combines the benefits of depth-first and
breadth-first search
Iterative deepening search
(Analysis)
optimal
complete
Time and space complexities
 reasonable
suitable for the problem
 having a large search space
 and the depth of the solution is not known
Properties of iterative deepening search
Complete? Yes
Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
Space? O(bd) (branching factor × depth, i.e., linear)
Optimal? Yes, if step cost = 1
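A sketch of iterative deepening: retry a depth-limited depth-first search with limits 0, 1, 2, … until a solution appears (the `max_depth` bound is an illustrative safeguard for problems with no solution):

```python
def iterative_deepening_search(start, goal, neighbours, max_depth=50):
    """Try depth limits 0, 1, 2, ... until a solution is found.
    Each pass is a plain depth-limited DFS using only linear space."""
    def dls(state, limit):
        if state == goal:
            return [state]
        if limit == 0:
            return None
        for s in neighbours(state):
            result = dls(s, limit - 1)
            if result is not None:
                return [state] + result
        return None

    for limit in range(max_depth + 1):
        result = dls(start, limit)
        if result is not None:
            return result              # shallowest solution, as in BFS
    return None
```

The repeated shallow passes are cheap relative to the last one, which is why the total time stays O(b^d) despite the re-expansion.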
Iterative lengthening search
IDS is using depth as limit
ILS is using path cost as limit
 an iterative version for uniform cost search
has the advantages of uniform cost search
 while avoiding its memory requirements
 but ILS incurs substantial overhead
 compared to uniform cost search
Bidirectional search
Run two simultaneous searches
 one forward from the initial state another
backward from the goal
 stop when the two searches meet
However, searching backward is difficult
 There may be a huge number of goal states
 at a goal state, which actions were used to
reach it?
 are the actions reversible, so that its
predecessors can be computed?
Bidirectional Strategy
2 fringe queues: FRINGE1 and FRINGE2
Time and space complexity = O(b^(d/2)) ≪ O(b^d)
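A sketch that alternately expands a forward and a backward breadth-first frontier one level at a time and stops when they meet, assuming every action is reversible (so the same `neighbours` works in both directions):

```python
def bidirectional_search(start, goal, neighbours):
    """Two breadth-first searches, forward from `start` and backward
    from `goal`; returns a shortest path once the frontiers meet."""
    if start == goal:
        return [start]
    fwd = {start: [start]}             # state -> path start..state
    bwd = {goal: [goal]}               # state -> path goal..state
    frontier_f, frontier_b = [start], [goal]
    while frontier_f and frontier_b:
        nxt = []                       # expand the forward frontier
        for state in frontier_f:
            for s in neighbours(state):
                if s in bwd:           # the two searches have met at s
                    return fwd[state] + list(reversed(bwd[s]))
                if s not in fwd:
                    fwd[s] = fwd[state] + [s]
                    nxt.append(s)
        frontier_f = nxt
        nxt = []                       # expand the backward frontier
        for state in frontier_b:
            for s in neighbours(state):
                if s in fwd:
                    return fwd[s] + list(reversed(bwd[state]))
                if s not in bwd:
                    bwd[s] = bwd[state] + [s]
                    nxt.append(s)
        frontier_b = nxt
    return None
```

Each side only has to reach depth about d/2, which is where the O(b^(d/2)) bound comes from.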
Bidirectional search
(figure: the same search tree, with a forward search from S and a
backward search from the goal meeting in the middle)
Comparing search strategies
(table: completeness, optimality, time, and space complexity for each
uninformed strategy)
Avoiding repeated states
for all search strategies
 There is a possibility of expanding states
 that have already been encountered and expanded before, on
some other path
 may cause the search to loop forever
 Algorithms that forget their history
 are doomed to repeat it
Avoiding repeated states
Three ways to deal with this possibility
 Do not return to the state it just came from
 Refuse generation of any successor same as its parent state
 Do not create paths with cycles
 Refuse generation of any successor same as its ancestor
states
 Do not generate any generated state
 Not only its ancestor states, but also all other expanded states
have to be checked against
Avoiding repeated states
We then define a data structure
 closed list:
a set storing every expanded node so far
 If the current node matches a node on the
closed list, discard it.
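The closed list translates into a few lines of code; a sketch of breadth-first graph search that discards any node whose state has already been expanded:

```python
def graph_search(start, goal, neighbours):
    """Tree search plus a closed list: a node whose state is already on
    the closed list is discarded instead of being expanded again."""
    fringe = [[start]]                 # FIFO fringe of paths
    closed = set()                     # every state expanded so far
    while fringe:
        path = fringe.pop(0)
        state = path[-1]
        if state == goal:
            return path
        if state in closed:
            continue                   # already expanded: discard it
        closed.add(state)
        fringe.extend(path + [s] for s in neighbours(state))
    return None
```

On a cyclic graph such as `{"A": ["B"], "B": ["A", "C"], "C": []}`, plain tree search could bounce between A and B forever; the closed list cuts that repetition off.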
AI Chapter Two Solving problems by searching.pdf

  • 1.
    College of Engineering Departmentof Software Engineering Chapter Two Solving Problems by Searching 1 Dr. Zeleke A. AASTU, October 2024
  • 2.
    Outlines 1. Problem SolvingAgents 2. Search 3. Uninformed search algorithms 4. Informed search algorithms 5. Constraint Satisfaction Problem Solving Problems by Searching
  • 3.
    Previous class Agents areused to provide a consistent viewpoint on various topics in the field of AI.  Problem-solving agents are a type of goal-based agent that operate by finding sequences of actions to achieve specific goals. Essential concepts:  Agents interact with environment by means of sensors and actuators.  A rational agent does “the right thing” ≡ maximizes a performance measure ➨ PEAS • Environment types: observable, deterministic, episodic, static, discrete • Agent types: table driven (rule based), simple reflex, model- based reflex, goal-based, utility-based, learning agent Solving Problems by Searching
  • 4.
    Structure of Agents Agent= Architecture + Program • Architecture • operating platform of the agent • computer system, specific hardware, possibly OS • Program • function that implements the mapping from percepts to actions Solving Problems by Searching
  • 5.
    Reflex agent issimple  base their actions on  a direct mapping from states to actions  but cannot work well in environments  which this mapping would be too large to store  and would take too long to learn Hence, goal-based agent is used Solving Problems by Searching
  • 6.
    Problem-solving agent Problem-solving agent A kind of goal-based agent  It solves problem by  finding sequences of actions that lead to desirable states (goals)  To solve a problem,  the first step is the goal formulation, based on the current situation Solving Problems by Searching
  • 7.
    Assume our agentsalways have access to information about the world, such as the map in Figure 3.1. With that information, the agent can follow this four-phase problem-solving process: 1. Goal formulation: The agent adopts the goal of reaching Bucharest.  Goals organize behavior by limiting the objectives and hence the actions to be considered. 2. Problem formulation: The agent devises a description of the states and actions necessary to reach the goal—an abstract model of the relevant part of the world. • For our agent, one good model is to consider the actions of traveling from one city to an adjacent city, and therefore the only fact about the state of the world that will change due to an action is the current city. 3. Search: Before taking any action in the real world, the agent simulates sequences of actions in its model, searching until it finds a sequence of actions that reaches the goal. Such a sequence is called a solution. The agent might have to simulate multiple sequences that do not reach the goal, but eventually it will find a solution (such as going from Arad to Sibiu to Fagaras to Bucharest), or it will find that no solution is possible. 4. Execution: The agent can now execute the actions in the solution, one at a time.
  • 8.
    Goal formulation The goalis formulated  as a set of world states, in which the goal is satisfied Reaching from initial state  goal state  Actions are required Actions are the operators  causing transitions between world states  Actions should be abstract enough at a certain degree, instead of very detailed  E.g., turn left VS turn left 30 degree, etc. Solving Problems by Searching
  • 9.
    Problem formulation The processof deciding  what actions and states to consider E.g., driving Addis Ababa  Hawasa  in-between states and actions defined  States: Some places in Addis & Hawasa  Actions: Turn left, Turn right, go straight, accelerate & brake, etc. Solving Problems by Searching
  • 10.
    Search Because there aremany ways to achieve the same goal  Those ways are together expressed as a tree  Multiple options of unknown value at a point,  the agent can examine different possible sequences of actions, and choose the best  This process of looking for the best sequence is called search  The best sequence is then a list of actions, called solution Solving Problems by Searching
  • 11.
    Search algorithm Defined as taking a problem  and returns a solution Once a solution is found  the agent follows the solution  and carries out the list of actions – execution phase Design of an agent  “Formulate, search, execute” Solving Problems by Searching
  • 12.
  • 13.
    Well-defined problems andsolutions A problem is defined by 5 components: Initial state Actions Transition model or Successor functions Goal Test Path Cost Solving Problems by Searching
  • 14.
    Well-defined problems andsolutions A problem is defined by 5 components:  The initial state  that the agent starts in  The set of possible actions  Transition model: description of what each action does. (successor functions): refer to any state reachable from given state by a single action  Initial state, actions and Transition model define the state space  the set of all states reachable from the initial state by any sequence of actions.  A path in the state space:  any sequence of states connected by a sequence of actions. Solving Problems by Searching
  • 15.
    Well-defined problems andsolutions The goal test  Applied to the current state to test  if the agent is in its goal -Sometimes there is an explicit set of possible goal states. (example: in Hawasa). -Sometimes the goal is described by the properties  instead of stating explicitly the set of states  Example: Chess  the agent wins if it can capture the KING of the opponent on next move ( checkmate).  no matter what the opponent does Solving Problems by Searching
  • 16.
    Well-defined problems andsolutions A path cost function,  assigns a numeric cost to each path  = performance measure  denoted by g  to distinguish the best path from others Usually the path cost is  the sum of the step costs of the individual actions (in the action list) Solving Problems by Searching
  • 17.
    Well-defined problems andsolutions Together a problem is defined by  Initial state  Actions  Successor function  Goal test  Path cost function The solution of a problem is then  a path from the initial state to a state satisfying the goal test Optimal solution  the solution with lowest path cost among all solutions Solving Problems by Searching
  • 18.
Formulating problems
Besides the five components for problem formulation, anything else?
Abstraction
 the process of taking out the irrelevant information
 leaving only the most essential parts in the description of the states (removing detail from the representation)
 Conclusion: only the most important parts, those that contribute to searching, are used
Evaluation Criteria
 Formulation of a problem as a search task
 Basic search strategies
 Important properties of search strategies
 Selection of search strategies for specific tasks
(The ordering of the nodes in FRINGE defines the search strategy)
Problem-Solving Agents
Agents whose task is to solve a particular problem (steps):
 Goal formulation
 What is the goal state?
 What are the important characteristics of the goal state?
 How does the agent know that it has reached the goal?
 Are there several possible goal states? Are they equal, or are some preferable?
 Problem formulation
 What are the possible states of the world relevant for solving the problem?
 What information is accessible to the agent?
 How can the agent progress from state to state?
Example: Romania
On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.
Formulate goal:
 be in Bucharest
Formulate problem:
 states: various cities
 actions: drive between cities
Find solution:
 sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Single-state problem formulation
A problem is defined by four items:
1. Initial state, e.g., "at Arad"
2. Actions or successor function S(x) = set of action–state pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. Goal test, which can be
 explicit, e.g., x = "at Bucharest"
 implicit, e.g., Checkmate(x)
4. Path cost (additive)
 e.g., sum of distances, number of actions executed, etc.
 c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state
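The Romania formulation can be made concrete as a small adjacency map. This is only a sketch showing a fragment of the map, using the standard road distances from the AIMA textbook:

```python
# Fragment of the Romania road map as an adjacency dict
# (distances in km, from the standard AIMA map).
romania = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101, "Craiova": 138},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

def path_cost(path):
    """Additive path cost: sum of step costs c(x, a, y) along the path."""
    return sum(romania[a][b] for a, b in zip(path, path[1:]))

# The solution from the slides: Arad -> Sibiu -> Fagaras -> Bucharest
cost = path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"])   # 450 km
```

Note that the route via Rimnicu Vilcea and Pitesti costs 418 km, cheaper than the 450 km route above, which is why cost-sensitive strategies such as uniform cost search matter.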
Example problems
Toy problems
 intended to illustrate or exercise various problem-solving methods
 e.g., puzzles, chess, etc.
Real-world problems
 tend to be more difficult, and their solutions are ones people actually care about
 e.g., design, planning, etc.
Toy problems
Example: vacuum world
 Number of states: 8
 Initial state: any
 Number of actions: 4 (left, right, suck, noOp)
 Goal: clean up all dirt; goal states: {7, 8}
 Path cost: each step costs 1
The 8-puzzle
States:
 a state description specifies the location of each of the eight tiles and the blank in one of the nine squares
Initial state:
 any state in the state space
Successor function:
 the blank moves Left, Right, Up, or Down
Goal test:
 the current state matches the goal configuration
Path cost:
 each step costs 1, so the path cost is just the length of the path
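The successor function for the blank's moves can be sketched as follows. Representing a state as a 9-tuple with 0 for the blank is a convention of this sketch, not of the slides:

```python
def successors(state):
    """8-puzzle successor function.
    `state` is a tuple of 9 ints in row-major order, 0 = blank.
    Returns a list of (action, new_state) pairs for the blank moving
    Up, Down, Left, or Right within the 3x3 board."""
    i = state.index(0)                 # position of the blank
    row, col = divmod(i, 3)
    moves = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}
    result = []
    for action, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:  # stay on the board
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]    # swap blank with the neighbouring tile
            result.append((action, tuple(s)))
    return result
```

A corner blank has 2 successors, an edge blank 3, and the centre 4, so the branching factor is at most 4.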
The 8-queens
There are two ways to formulate the problem. Both have the following in common:
 Goal test: 8 queens on the board, none attacking another
 Path cost: zero
The 8-queens (1)
Incremental formulation
 involves operators that augment the state description, starting from an empty state
 each action adds a queen to the state
 States: any arrangement of 0 to 8 queens on the board
 Successor function: add a queen to any empty square
The 8-queens (2)
Complete-state formulation
 starts with all 8 queens on the board
 moves the queens around individually
 States: any arrangement of 8 queens, one per column in the leftmost columns
 Operators: move an attacked queen to a row not attacked by any other queen
The 8-queens
Conclusion:
 the right formulation makes a big difference to the size of the search space
Example: River Crossing
Items: Man, Wolf, Corn, Chicken. The man wants to cross the river with all items.
 The wolf will eat the chicken
 The chicken will eat the corn
 The boat will take a maximum of two
Searching for solutions
3.3 Searching for solutions
Finding a solution is done by searching through the state space
All problems are transformed into a search tree
 generated by the initial state and the successor function
Search tree
 The root of the search tree is a search node for the initial state
 Expanding: applying the successor function to the current state, thereby generating a new set of states
 Leaf nodes: the states having no successors
 Fringe: the set of search nodes that have not been expanded yet
Refer to the next figure
Tree search example (figures on the original slides trace the stepwise expansion of a search tree)
Search tree
The essence of searching:
 choosing one option and keeping the others for later inspection, in case the first choice is not correct
Hence we have the search strategy,
 which determines the choice of which state to expand
 a good choice means less work, hence a faster search
Important:
 state space ≠ search tree
    Search tree State space has unique states {A, B}  while a search tree may have cyclic paths: A-B-A-B-A-B- … A good search strategy should avoid such paths Solving Problems by Searching
  • 41.
Search tree
A node has five components:
 STATE: which state it is in the state space
 PARENT-NODE: from which node it is generated
 ACTION: which action applied to its parent node generated it
 PATH-COST: the cost, g(n), from the initial state to the node n itself
 DEPTH: the number of steps along the path from the initial state
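The five node components can be sketched as a small Python class; the constructor signature and the `path` helper are illustrative choices:

```python
class Node:
    """Search-tree node with the five components listed above:
    STATE, PARENT-NODE, ACTION, PATH-COST g(n), and DEPTH."""

    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = (parent.path_cost if parent else 0) + step_cost  # g(n)
        self.depth = (parent.depth + 1) if parent else 0

    def path(self):
        """States from the initial state to this node, via parent links."""
        node, states = self, []
        while node:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))
```

When a goal node is found, following the parent links (as `path` does) recovers the solution path.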
Measuring problem-solving performance
The evaluation of a search strategy:
 Completeness: is the strategy guaranteed to find a solution when there is one?
 Optimality: does the strategy find the highest-quality solution when there are several different solutions?
 Time complexity: how long does it take to find a solution?
 Space complexity: how much memory is needed to perform the search?
Measuring problem-solving performance
In AI, complexity is expressed in terms of
 b, the branching factor: the maximum number of successors of any node
 d, the depth of the shallowest goal node (depth of the least-cost solution)
 m, the maximum length of any path in the state space
Time and space are measured in
 the number of nodes generated during the search
 the maximum number of nodes stored in memory
Measuring problem-solving performance
For the effectiveness of a search algorithm,
 we can consider the total cost
 total cost = path cost (g) of the solution found + search cost
 search cost = the time necessary to find the solution
Tradeoff:
 (long time, optimal solution with least g)
 vs. (shorter time, solution with slightly larger path cost g)
Previous class
 Problem-solving agents
 goal-based agents that operate by finding sequences of actions to achieve specific goals
 Core functionality: the ability to formulate a problem and then search for a solution
 They follow a "formulate, search, execute" cycle:
 Formulate goal: the agent identifies what it wants to achieve
 Formulate problem: the agent defines the problem in terms of its current state and the desired goal
 Search: the agent employs a search algorithm to explore possible actions that can lead to the goal
 Execute: once a sequence of actions is determined, the agent executes these actions one at a time until the goal is achieved
A problem can be formally defined by five components:
1. Initial state: the starting condition from which the agent begins its task
2. Actions: the set of possible actions available from any given state
3. Transition model: a description of how each action affects the current state, often represented by a function that maps states and actions to resulting states
4. Goal state: the condition that defines success for the agent
5. Path cost: a measure that evaluates the total cost associated with reaching the goal from the initial state
Uninformed search strategies
Uninformed search
 has no information about the number of steps or the path cost from the current state to the goal
 searches the state space blindly
Informed search, or heuristic search,
 is a strategy that searches toward the goal, based on the information gathered from the current state so far
Uninformed search strategies
Uninformed search and informed search are two fundamental categories of algorithms used to explore the search space and find solutions to problems.
 The primary distinction between the two lies in how they use information during the search process.
Uninformed search
 Uninformed search algorithms, also known as blind search, operate without any additional knowledge about the goal state or the structure of the state space.
 They rely solely on the problem definition provided at the outset. Some key characteristics:
 No heuristic information: uninformed search does not use heuristics or any domain-specific knowledge to guide its exploration. It treats all paths equally and explores them systematically.
 Common examples include:
 Breadth-first search (BFS): explores all nodes at the present depth before moving on to nodes at the next depth level
 Depth-first search (DFS): explores as far down one branch as possible before backtracking
Uninformed search strategies
 Breadth-first search
 Uniform cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search
 Bidirectional search
Breadth-first search
 The root node is expanded first (FIFO)
 All the nodes generated by the root node are then expanded
 And then their successors, and so on
Breadth-First Strategy
New nodes are inserted at the end of FRINGE. As the example tree (root 1, children 2 and 3, grandchildren 4–7) is expanded, the fringe evolves:
 FRINGE = (1)
 FRINGE = (2, 3)
 FRINGE = (3, 4, 5)
 FRINGE = (4, 5, 6, 7)
Breadth-first search (Analysis)
Breadth-first search is
 complete: it finds a solution eventually
 optimal, if the step cost is 1
The disadvantage:
 if the branching factor is large, then even for small instances (e.g., chess) the space complexity and the time complexity are enormous
Properties of breadth-first search
 Complete? Yes (if b is finite)
 Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
 Space? O(b^d) (keeps every node in memory)
 Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more so than time)
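A minimal BFS sketch with a FIFO fringe. Assumptions of this sketch: `successors(state)` returns the neighbouring states, and the fringe stores whole paths rather than nodes (a simplification for illustration):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """BFS: the fringe is a FIFO queue; new nodes go to the end,
    so the shallowest unexpanded node is always expanded first."""
    if start == goal:
        return [start]
    fringe = deque([[start]])          # queue of paths
    visited = {start}
    while fringe:
        path = fringe.popleft()        # FIFO: take from the front
        for s in successors(path[-1]):
            if s in visited:
                continue               # skip already-seen states
            if s == goal:
                return path + [s]
            visited.add(s)
            fringe.append(path + [s])  # new nodes at the end
    return None                        # no solution
```

On the example tree above (1 → {2, 3}, 2 → {4, 5}, 3 → {6, 7}), searching for node 7 expands 1, 2, 3 in that order, matching the FRINGE evolution shown.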
Breadth-first search (Analysis)
 Assuming 10,000 nodes can be processed per second, each requiring 1,000 bytes of storage, time and memory requirements grow explosively with the depth d
Uniform cost search
Breadth-first search finds the shallowest goal state,
 but that is not necessarily the least-cost solution
 it works only if all step costs are equal
Uniform cost search
 modifies the breadth-first strategy by always expanding the lowest-cost node
 the lowest-cost node is measured by the path cost g(n)
Uniform cost search
 The first solution found is guaranteed to be the cheapest, not necessarily the least in depth
 But this is restricted to non-decreasing path costs
 Unsuitable for operators with negative cost
Uniform-cost search
Expand the least-cost unexpanded node
Implementation:
 fringe = queue ordered by path cost
 Equivalent to breadth-first search if all step costs are equal
Complete? Yes, if step cost ≥ ε
Time? # of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉)
Space? # of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉)
Optimal? Yes: nodes are expanded in increasing order of g(n)
(C* is the cost of the optimal solution; ε is a positive constant lower bound on every action cost)
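A uniform-cost search sketch using a priority queue ordered by g(n). An assumption of this sketch: `neighbours(state)` returns (next_state, step_cost) pairs:

```python
import heapq

def uniform_cost_search(start, goal, neighbours):
    """UCS: the fringe is ordered by path cost g(n);
    the cheapest unexpanded node is always expanded first.
    Returns (g, path) for the optimal solution, or None."""
    fringe = [(0, start, [start])]     # heap of (g, state, path)
    best_g = {start: 0}                # cheapest known cost per state
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:              # goal test on expansion => optimal
            return g, path
        for s, c in neighbours(state):
            new_g = g + c
            if new_g < best_g.get(s, float("inf")):
                best_g[s] = new_g
                heapq.heappush(fringe, (new_g, s, path + [s]))
    return None
```

Applying the goal test on expansion (not generation) is what makes UCS optimal: a goal state may be generated via an expensive path first, but it is only returned once it is the cheapest node on the fringe.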
Depth-first search
 Always expands one of the nodes at the deepest level of the tree
 Only when the search hits a dead end does it go back and expand nodes at shallower levels
 Dead end: a leaf node that is not the goal
Backtracking search
 only one successor is generated on expansion, rather than all successors
 uses even less memory
Depth-first search
Expand the deepest unexpanded node
Implementation:
 fringe = LIFO queue, i.e., put successors at the front
(The original slides trace this expansion step by step on an example tree)
Depth-first search (figure: an example search tree over states S, A–G with edge costs, searched depth-first)
Depth-first search (Analysis)
Not complete
 because a path may be infinite or looping
 such a path never fails, so the search never goes back to try another option
Not optimal
 it does not guarantee the best solution
Its advantage:
 it overcomes the time and space problems of breadth-first search, requiring only linear space
Properties of depth-first search
 Complete? No: fails in infinite-depth spaces and spaces with loops
 Modify to avoid repeated states along the path → complete in finite spaces
 Time? O(b^m): terrible if m is much larger than d
 but if solutions are dense, it may be much faster than breadth-first
 Space? O(b·m), i.e., linear space!
 Optimal? No
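A DFS sketch with a LIFO fringe. The `s not in path` check avoids cycles only along the current path, which keeps the linear space bound while making the sketch terminate on finite graphs:

```python
def depth_first_search(start, goal, successors):
    """DFS: the fringe is a LIFO stack; successors go on the front,
    so the deepest unexpanded node is always expanded next."""
    fringe = [[start]]                 # stack of paths
    while fringe:
        path = fringe.pop()            # LIFO: take from the top
        state = path[-1]
        if state == goal:
            return path
        for s in reversed(successors(state)):   # keep left-to-right order
            if s not in path:          # avoid cycles along this path only
                fringe.append(path + [s])
    return None
```

Because only the current path and its siblings' unexpanded branches are stored, memory use stays linear in the depth, unlike BFS.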
Depth-Limited Strategy
Depth-first search with depth cutoff k (the maximal depth below which nodes are not expanded)
Three possible outcomes:
 Solution
 Failure (no solution)
 Cutoff (no solution within the cutoff)
Depth-limited search
It is depth-first search with a predefined maximum depth
 However, it is usually not easy to define a suitable maximum depth
 too small → no solution can be found
 too large → the same problems as depth-first search
In any case the search is
 complete (if the limit is at least the solution depth)
 but still not optimal
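Depth-limited search with its three outcomes can be sketched recursively. Returning the strings "cutoff" and "failure" is a convention of this sketch:

```python
def depth_limited_search(state, goal, successors, limit):
    """Depth-first search with depth cutoff `limit`.
    Returns a solution path, "failure" (no solution at all),
    or "cutoff" (no solution within the cutoff)."""
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"                     # ran out of depth here
    cutoff_occurred = False
    for s in successors(state):
        result = depth_limited_search(s, goal, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result != "failure":           # a real solution path
            return [state] + result
    return "cutoff" if cutoff_occurred else "failure"
```

Distinguishing "cutoff" from "failure" matters: "cutoff" means a deeper limit might still succeed, while "failure" means the whole space within reach has been exhausted.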
Depth-limited search (figure: the example search tree from before, searched with depth limit = 3)
Iterative deepening search
No choosing of the best depth limit
It tries all possible depth limits:
 first 0, then 1, then 2, and so on
 combines the benefits of depth-first and breadth-first search
Iterative deepening search (Analysis)
 optimal
 complete
 time and space complexities are reasonable
Suitable for problems
 having a large search space
 where the depth of the solution is not known
Properties of iterative deepening search
 Complete? Yes
 Time? (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
 Space? O(b·d)
 Optimal? Yes, if step cost = 1
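An IDS sketch that tries depth limits 0, 1, 2, …; for brevity it merges the cutoff/failure distinction, so this version would loop forever if no solution exists (a solvable problem is assumed):

```python
import itertools

def iterative_deepening_search(start, goal, successors):
    """IDS: run depth-limited search with limits 0, 1, 2, ...
    The first solution found is the shallowest one."""
    def dls(state, limit):
        if state == goal:
            return [state]
        if limit == 0:
            return None                # cutoff/failure merged in this sketch
        for s in successors(state):
            result = dls(s, limit - 1)
            if result is not None:
                return [state] + result
        return None

    for limit in itertools.count():    # 0, 1, 2, ... (assumes a solution exists)
        result = dls(start, limit)
        if result is not None:
            return result
```

Re-expanding the shallow levels on every iteration looks wasteful, but since most nodes of a b-ary tree sit at the deepest level, the total work is still O(b^d), as the properties above state.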
Iterative lengthening search
 IDS uses depth as the limit
 ILS uses path cost as the limit
 an iterative version of uniform cost search
 has the advantages of uniform cost search while avoiding its memory requirements
 but ILS incurs substantial overhead compared with uniform cost search
Bidirectional search
Run two simultaneous searches:
 one forward from the initial state, another backward from the goal
 stop when the two searches meet
However, computing backward is difficult:
 there may be a huge number of goal states
 at a goal state, which actions were used to reach it?
 are the actions reversible, so that predecessors can be computed?
Bidirectional Strategy
 2 fringe queues: FRINGE1 and FRINGE2
 Time and space complexity = O(b^(d/2)) << O(b^d)
Bidirectional search (figure: the example search tree searched forward from S and backward from the goals G)
Avoiding repeated states
For all search strategies:
 there is the possibility of expanding states that have already been encountered and expanded before, on some other path
 this may cause the path to be infinite → loop forever
 "Algorithms that forget their history are doomed to repeat it"
Avoiding repeated states
Three ways to deal with this possibility:
 Do not return to the state the search just came from
 refuse generation of any successor equal to its parent state
 Do not create paths with cycles
 refuse generation of any successor equal to any of its ancestor states
 Do not generate any previously generated state
 not only its ancestor states, but all other expanded states must be checked against
Avoiding repeated states
We then define a data structure:
 closed list: a set storing every expanded node so far
 if the current node matches a node on the closed list, discard it
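Graph search with a closed list can be sketched as BFS plus an explored set; the check happens on expansion, exactly as described above:

```python
from collections import deque

def graph_search(start, goal, successors):
    """Tree search plus a closed list: any node whose state has
    already been expanded is discarded, so cyclic state spaces
    (e.g., A-B-A-B-...) no longer cause infinite paths."""
    fringe = deque([[start]])
    closed = set()                     # every expanded state so far
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state in closed:
            continue                   # repeated state: discard
        closed.add(state)
        for s in successors(state):
            fringe.append(path + [s])
    return None                        # finite space exhausted: no solution
```

On the two-state cyclic space {A, B} from the earlier slide, the closed list limits the search to at most two expansions, whereas plain tree search would generate A-B-A-B-… forever.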