UNIT 2 (Problem Solving Agent)

Problem-solving agents in artificial intelligence operate in three phases: problem formulation, search, and execution. Problem formulation involves defining the initial state, actions, transition model, goal test, and path cost. Search algorithms are categorized into uninformed and informed types, with various strategies such as breadth-first search and A* search, each having distinct properties like completeness, optimality, time, and space complexity.


Problem Solving Agents

Agents solve a problem in three phases: i) formulate, ii) search, iii) execute.
Problem formulation is the process of deciding what actions and states to consider, given a goal.
Searching is the process of looking for a sequence of actions that reaches the goal.
Execution is carrying out the solution found by the searching process.
Problem Formulation
A problem can be defined formally by five components:
Initial State [s]
Actions [Actions(s)]
Transition Model [Result(s,a)]
Goal Test [s]
Path cost function [c(s,a,s')]
Problem formulation for Vacuum Cleaner

[Figure: the state space for the two-square vacuum world. Links denote actions: L = Left, R = Right, S = Suck.]

i) The initial state can be any of the 8 states.

ii) Actions are Left, Right, and Suck.

iii) The transition model is represented by the state-space diagram above.

iv) Goal states are the states in which all squares are clean.

v) Path cost is uniform (i.e. every step has equal cost).
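
The formulation above can be written down directly as code. Below is a minimal Python sketch (not part of the original notes) of the five components for the two-square vacuum world; the class name VacuumProblem and the (location, dirt_left, dirt_right) state encoding are illustrative choices, not a standard API.

class VacuumProblem:
    # State = (agent_location, left_square_dirty, right_square_dirty).

    def __init__(self):
        # Initial state: agent on the Left square, both squares dirty
        # (any of the 8 states could be used here).
        self.initial_state = ('Left', True, True)

    def actions(self, state):
        # Actions(s): the same three actions are applicable in every state.
        return ['Left', 'Right', 'Suck']

    def result(self, state, action):
        # Result(s, a): the deterministic transition model.
        loc, dirt_left, dirt_right = state
        if action == 'Left':
            return ('Left', dirt_left, dirt_right)
        if action == 'Right':
            return ('Right', dirt_left, dirt_right)
        # Suck cleans the square the agent is currently on.
        if loc == 'Left':
            return (loc, False, dirt_right)
        return (loc, dirt_left, False)

    def goal_test(self, state):
        # Goal test: every square is clean.
        return not state[1] and not state[2]

    def step_cost(self, state, action, next_state):
        # c(s, a, s'): uniform path cost, every step costs 1.
        return 1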


Search Algorithms in Artificial Intelligence

Search algorithms are one of the most important areas of Artificial Intelligence. This topic will explain all about the search algorithms in AI.

Problem-solving agents:
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use atomic representation. In this topic, we will learn various problem-solving search algorithms.
Search Algorithm Terminologies:
Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
a. Search Space: The search space represents the set of possible solutions which a system may have.
b. Start State: It is the state from which the agent begins the search.
c. Goal Test: It is a function which observes the current state and returns whether the goal state is achieved or not.
Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
Actions: It gives the description of all the actions available to the agent.
Transition model: A description of what each action does; it can be represented as a transition model.
Path Cost: It is a function which assigns a numeric cost to each path.
Solution: It is an action sequence which leads from the start node to the goal node.
Optimal Solution: A solution that has the lowest cost among all solutions.

Properties of Search Algorithms:

Following are the four essential properties of search algorithms, used to compare the efficiency of these algorithms:
Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all other solutions, then such a solution is said to be an optimal solution.

Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.

Space Complexity: It is the maximum storage space required at any point during the search, as a function of the complexity of the problem.

Types of search algorithms

Based on the search problems, we can classify the search algorithms into uninformed search (blind search) and informed search (heuristic search) algorithms.

Search Algorithms:

o Uninformed/Blind Search: Breadth-first search, Uniform cost search, Depth-first search, Depth-limited search, Iterative deepening depth-first search, Bidirectional search

o Informed Search: Best First Search, A* Search

Uninformed/Blind Search:
The uninformed search does not contain any domain knowledge such as closeness or the location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search searches the search tree without any information about the search space, such as initial state operators and tests for the goal, so it is also called blind search. It examines each node of the tree until it achieves the goal node.
It can be divided into six main types:

o Breadth-first search
o Uniform cost search

o Depth-first search

o Depth-limited search

o Iterative deepening depth-first search

o Bidirectional search
Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem information is available which can guide the search. Informed search strategies can find a solution more efficiently than an uninformed search strategy. Informed search is also called heuristic search.

A heuristic is a way which might not always be guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.

Informed search can solve much more complex problems which could not be solved in another way.

An example of a problem tackled with informed search algorithms is the traveling salesman problem.

1. Greedy Search

2. A* Search

Uninformed Search Algorithms

Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way. Uninformed search algorithms do not have additional information about the state or search space other than how to traverse the tree, so it is also called blind search.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search

2. Depth-first Search

3. Depth-limited Search

4. Iterative deepening depth-first search

5. Uniform cost search

6. Bidirectional Search

1. Breadth-first Search:
Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.

o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.

o The breadth-first search algorithm is an example of a general-graph search algorithm.

o Breadth-first search is implemented using a FIFO queue data structure.

Advantages:

o BFS will provide a solution if any solution exists.

o If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e. the one which requires the least number of steps.

Disadvantages:
o It requires lots of memory, since each level of the tree must be saved into memory to expand the next level.
o BFS needs lots of time if the solution is far away from the root node.
Example:
In the below tree structure, we have shown the traversing of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow the path which is shown by the dotted arrow, and the traversed path will be:
1. S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K

[Figure: Breadth First Search example tree, levels 0 to 4]
Time Complexity: The time complexity of the BFS algorithm can be obtained by counting the number of nodes traversed in BFS until the shallowest node, where d = depth of the shallowest solution and b = branching factor (number of successors) at every state:

T(b) = 1 + b + b^2 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will find a solution.

Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
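
As a concrete illustration of the FIFO-queue idea, here is a minimal Python sketch of breadth-first graph search. It assumes a problem object with the initial_state, actions, result, and goal_test interface sketched in the vacuum-world formulation earlier; it returns the list of actions to a goal, or None if no solution exists.

from collections import deque

def breadth_first_search(problem):
    # FIFO queue of (state, path-of-actions); shallowest nodes are expanded first.
    start = problem.initial_state
    if problem.goal_test(start):
        return []
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, path = frontier.popleft()
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored:
                if problem.goal_test(child):     # test at generation time
                    return path + [action]
                explored.add(child)
                frontier.append((child, path + [action]))
    return None                                  # no solution exists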

2. Depth-first Search

o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.

o It is called depth-first search because it starts from the root node and follows each path to its greatest-depth node before moving to the next path.

o DFS uses a stack data structure for its implementation.

o The process of the DFS algorithm is similar to the BFS algorithm.

Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.

Advantage:
o DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.

o It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

Disadvantage:
o There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
o The DFS algorithm goes for deep-down searching, and sometimes it may go into an infinite loop.

Example:
In the below search tree, we have shown the flow of depth-first search, and it will follow the order:

Root node ---> Left node ---> Right node.

It will start from root node S, and traverse A, then B, then D and E; after traversing E, it will backtrack the tree, as E has no other successor and the goal node has still not been found. After backtracking, it will traverse node C and then G, and here it will terminate, as it has found the goal node.

[Figure: Depth First Search example tree]
Completeness: The DFS search algorithm is complete within a finite state space, as it will expand every node within a limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:

T(b) = 1 + b + b^2 + ... + b^m = O(b^m)

where m = maximum depth of any node, and this can be much larger than d (the depth of the shallowest solution).

Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

Optimal: The DFS search algorithm is non-optimal, as it may generate a large number of steps or a high cost to reach the goal node.
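
A hedged Python sketch of DFS using an explicit stack (rather than recursion) is shown below; it assumes the same problem interface as the BFS sketch and returns a path of actions or None.

def depth_first_search(problem):
    # LIFO stack of (state, path-of-actions); deepest nodes are expanded first.
    frontier = [(problem.initial_state, [])]
    explored = set()
    while frontier:
        state, path = frontier.pop()
        if problem.goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored:
                frontier.append((child, path + [action]))
    return None                                  # no solution in this (finite) state space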

3. Depth-Limited Search Algorithm:

A depth-limited search algorithm is similar to depth-first search with a predetermined limit. Depth-limited search can solve the drawback of the infinite path in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.

Depth-limited search can be terminated with two conditions of failure:

o Standard failure value: It indicates that the problem does not have any solution.
o Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.

Advantages:
Depth-limited search is memory efficient.

Disadvantages:
o Depth-limited search also has the disadvantage of incompleteness.

o It may not be optimal if the problem has more than one solution.

Example:
[Figure: Depth Limited Search example tree]

Completeness: The DLS search algorithm is complete if the solution is above the depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^l), where l is the depth limit.

Space Complexity: The space complexity of the DLS algorithm is O(b x l).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if l > d.
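
The cutoff/standard-failure distinction can be made explicit in code. The recursive sketch below is an illustrative Python version of depth-limited tree search, again assuming the problem interface used earlier; 'cutoff' signals the cutoff failure value and None signals standard failure.

def depth_limited_search(problem, limit):
    # Returns a path of actions, 'cutoff' (cutoff failure), or None (standard failure).
    def recurse(state, path, depth):
        if problem.goal_test(state):
            return path
        if depth == limit:
            return 'cutoff'                      # the depth limit acts as "no successors"
        cutoff_occurred = False
        for action in problem.actions(state):
            child = problem.result(state, action)
            result = recurse(child, path + [action], depth + 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return result
        return 'cutoff' if cutoff_occurred else None
    return recurse(problem.initial_state, [], 0)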

4. Uniform-cost Search Algorithm:

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where the optimal cost is in demand. A uniform-cost search algorithm is implemented by a priority queue. It gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.

Advantages:
o Uniform-cost search is optimal, because at every state the path with the least cost is chosen.

Disadvantages:
o It does not care about the number of steps involved in searching; it is only concerned about path cost. Due to this, the algorithm may get stuck in an infinite loop.

Example:
[Figure: Uniform Cost Search example tree, levels 0 to 4]

Completeness:
Uniform-cost search is complete: if there is a solution, UCS will find it.

Time Complexity:
Let C* be the cost of the optimal solution and e be the least step cost (each step gets at least e closer to the goal node). Then the number of steps is C*/e + 1 (we take +1 because we start from state 0 and end at C*/e). Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + C*/e)).

Space Complexity:
The same logic applies for space, so the worst-case space complexity of uniform-cost search is O(b^(1 + C*/e)).

Optimal:
Uniform-cost search is always optimal, as it only selects a path with the lowest path cost.
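
A minimal Python sketch of uniform-cost search with a priority queue ordered by the cumulative path cost g(n) is given below. It assumes the problem interface used in the earlier sketches, including a step_cost(s, a, s') method; the tie-breaking counter is only there so that states never need to be compared directly.

import heapq
import itertools

def uniform_cost_search(problem):
    # Priority queue ordered by cumulative path cost g(n).
    counter = itertools.count()                  # tie-breaker so states are never compared
    frontier = [(0, next(counter), problem.initial_state, [])]
    best_cost = {problem.initial_state: 0}
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):             # testing on expansion keeps UCS optimal
            return path, cost
        if cost > best_cost.get(state, float('inf')):
            continue                             # stale queue entry
        for action in problem.actions(state):
            child = problem.result(state, action)
            new_cost = cost + problem.step_cost(state, action, child)
            if new_cost < best_cost.get(child, float('inf')):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, next(counter), child, path + [action]))
    return None, float('inf')                    # no solution exists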
5. Iterative deepening depth-first Search:
The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.

This algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing the depth limit after each iteration until the goal node is found.

This search algorithm combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency.

The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.

Advantages:
o It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.

Disadvantages:

o The main drawback of IDDFS is that it repeats all the work of the previous phase.

Example:
The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs various iterations until it finds the goal node. The iterations performed by the algorithm are given as:

[Figure: Iterative deepening depth-first search example tree, levels 0 to 3]

1st Iteration ----> A
2nd Iteration ----> A, B, C
3rd Iteration ----> A, B, D, E, C, F, G
4th Iteration ----> A, B, D, H, I, E, C, F, G, K
In the fourth iteration, the algorithm will find the goal node.

Completeness:
This algorithm is complete if the branching factor is finite.

Time Complexity:
Let's suppose b is the branching factor and the depth is d; then the worst-case time complexity is O(b^d).

Space Complexity:
The space complexity of IDDFS will be O(bd).

Optimal:
The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
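
Iterative deepening is easiest to see as a loop around the depth-limited search sketched earlier. The Python sketch below reuses that depth_limited_search function and the same problem interface; max_depth is only an arbitrary safety bound, not part of the algorithm.

def iterative_deepening_search(problem, max_depth=50):
    # Run depth-limited search with limits 0, 1, 2, ... until the result is not a cutoff.
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, limit)
        if result != 'cutoff':
            return result                        # a path of actions, or None (no solution)
    return 'cutoff'                              # goal deeper than max_depth (safety bound)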

6. Bidirectional Search Algorithm:

The bidirectional search algorithm runs two simultaneous searches, one from the initial state called the forward search and the other from the goal node called the backward search, to find the goal node. Bidirectional search replaces one single search graph with two small subgraphs, in which one starts the search from the initial vertex and the other starts from the goal vertex. The search stops when these two graphs intersect each other.
Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory.

Disadvantages:
o Implementation of the bidirectional search tree is difficult.

o In bidirectional search, one should know the goal state in advance.

Example:
In the below search tree, the bidirectional search algorithm is applied. This algorithm divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and starts from goal node 16 in the backward direction.

The algorithm terminates at node 9, where the two searches meet.

[Figure: Bidirectional search example, root node 1, goal node 16, intersection at node 9]

Completeness: Bidirectional search is complete if we use BFS in both searches.

Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).

Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimal: Bidirectional search is optimal.
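
The sketch below illustrates bidirectional search with two BFS frontiers. It assumes the graph is given explicitly as an adjacency dictionary of an undirected graph (so the backward search can follow edges in reverse), rather than the problem interface used earlier; the function returns a list of nodes from start to goal, or None.

def bidirectional_search(graph, start, goal):
    # graph: adjacency dict {node: [neighbors]} of an undirected graph.
    if start == goal:
        return [start]
    parent_fwd, parent_bwd = {start: None}, {goal: None}   # also act as "visited" sets
    frontier_fwd, frontier_bwd = [start], [goal]

    def join_path(meet):
        # Stitch the two half-paths together at the meeting node.
        path = []
        node = meet
        while node is not None:
            path.append(node)
            node = parent_fwd[node]
        path.reverse()                           # start ... meet
        node = parent_bwd[meet]
        while node is not None:
            path.append(node)                    # ... goal
            node = parent_bwd[node]
        return path

    while frontier_fwd and frontier_bwd:
        # Expand the smaller frontier by one full layer.
        if len(frontier_fwd) <= len(frontier_bwd):
            frontier, parent, other = frontier_fwd, parent_fwd, parent_bwd
        else:
            frontier, parent, other = frontier_bwd, parent_bwd, parent_fwd
        next_layer = []
        for node in frontier:
            for nbr in graph[node]:
                if nbr not in parent:
                    parent[nbr] = node
                    if nbr in other:             # the two searches intersect here
                        return join_path(nbr)
                    next_layer.append(nbr)
        if frontier is frontier_fwd:
            frontier_fwd = next_layer
        else:
            frontier_bwd = next_layer
    return None                                  # start and goal are not connected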


Informed Search Algorithms
So far we have talked about the uninformed search algorithms, which looked through the search space for all possible solutions of the problem without having any additional knowledge about the search space. But an informed search algorithm contains knowledge such as how far we are from the goal, path cost, how to reach the goal node, etc. This knowledge helps agents to explore less of the search space and find the goal node more efficiently.

The informed search algorithm is more useful for a large search space. An informed search algorithm uses the idea of a heuristic, so it is also called heuristic search.

Heuristic function: A heuristic is a function which is used in informed search, and it finds the most promising path. It takes the current state of the agent as its input and produces an estimation of how close the agent is to the goal. The heuristic method, however, might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the pair of states. The value of the heuristic function is always positive.
Admissibility of the heuristic function is given as:

1. h(n) <= h*(n)

Here h(n) is the heuristic cost, and h*(n) is the actual (optimal) cost to reach the goal from n. Hence, the heuristic cost should be less than or equal to the actual cost.

Pure Heuristic Search:

Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, an OPEN and a CLOSED list. In the CLOSED list, it places those nodes which have already been expanded, and in the OPEN list, it places nodes which have not yet been expanded.

On each iteration, the node n with the lowest heuristic value is expanded; all its successors are generated, and n is placed in the CLOSED list. The algorithm continues until a goal state is found.

In informed search we will discuss the two main algorithms given below:
o Best First Search Algorithm (Greedy search)

o A* Search Algorithm

1.) Best-first Search Algorithm (Greedy Search):

The greedy best-first search algorithm always selects the path which appears best at that moment. It is the combination of depth-first search and breadth-first search algorithms. It uses the heuristic function and search. Best-first search allows us to take advantage of both algorithms. With the help of best-first search, at each step, we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, and the closeness is estimated by a heuristic function, i.e.

1. f(n) = h(n)

where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented by a priority queue.

Best-first search algorithm:
o Step 1: Place the starting node into the OPEN list.
o Step 2: If the OPEN list is empty, stop and return failure.

o Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.

o Step 4: Expand the node n, and generate the successors of node n.

o Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any successor node is a goal node, then return success and terminate the search; else proceed to Step 6.

o Step 6: For each successor node, the algorithm checks the evaluation function f(n), and then checks whether the node is in either the OPEN or CLOSED list. If the node is in neither list, then add it to the OPEN list.

o Step 7: Return to Step 2.

Advantages:
o Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.

o This algorithm is more efficient than the BFS and DFS algorithms.

Disadvantages:
o It can behave as an unguided depth-first search in the worst-case scenario.
o It can get stuck in a loop, like DFS.

o This algorithm is not optimal.

Example:
Consider the below search problem, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the below table.

[Table: heuristic value h(n) for each node of the example graph]

[Figure: example search graph with edge costs and h(n) values]

In this search example, we are using the two lists OPEN and CLOSED. Following are the iterations for traversing the above example.

Expand the nodes of S and put them in the CLOSED list:

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S ---> B ---> F ---> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Complete: Greedy best-first search is incomplete, even if the given state space is finite.

Optimal: The greedy best-first search algorithm is not optimal.
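
A minimal Python sketch of greedy best-first search is given below: the priority queue (OPEN list) is ordered by h(n) alone, and a CLOSED set records expanded states. It assumes the problem interface used earlier plus a heuristic function h supplied by the caller.

import heapq
import itertools

def greedy_best_first_search(problem, h):
    # OPEN list: priority queue ordered by h(n) only; CLOSED: set of expanded states.
    counter = itertools.count()                  # tie-breaker for the heap
    frontier = [(h(problem.initial_state), next(counter), problem.initial_state, [])]
    closed = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)      # node with the lowest h(n)
        if problem.goal_test(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in closed:
                heapq.heappush(frontier, (h(child), next(counter), child, path + [action]))
    return None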
2.) A* Search Algorithm:
A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It has the combined features of UCS and greedy best-first search, by which it solves the problem efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands a smaller search tree and provides optimal results faster. The A* algorithm is similar to UCS, except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:

f(n) = g(n) + h(n)

where f(n) = estimated cost of the cheapest solution, g(n) = cost to reach node n from the start state, and h(n) = cost to reach the goal node from node n.

At each point in the search space, only the node which has the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty or not; if the list is empty, then return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise go to Step 4.

Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.

Advantages:
o The A* search algorithm performs better than other search algorithms.
o The A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:
o It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
o The A* search algorithm has some complexity issues.
o The main drawback of A* is its memory requirement, as it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is given in the below table, so we will calculate the f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach any node from the start state.

Here we will use the OPEN and CLOSED lists.

[Table: heuristic value h(n) for each state of the example graph]

Solution:

Initialization: {(S, 5)}

Iteration 1: {(S--> A, 4), (S--> G, 10)}

Iteration 2: {(S--> A--> C, 4), (S--> A--> B, 7), (S--> G, 10)}

Iteration 3: {(S--> A--> C--> G, 6), (S--> A--> C--> D, 11), (S--> A--> B, 7), (S--> G, 10)}

Iteration 4 will give the final result, as S--> A--> C--> G provides the optimal path with cost 6.

Points to remember:

o The A* algorithm returns the path which occurred first, and it does not search for all remaining paths.

o The efficiency of the A* algorithm depends on the quality of the heuristic.

o The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.

Complete: The A* algorithm is complete as long as:

o The branching factor is finite.

o The cost of every action is fixed.

Optimal: The A* search algorithm is optimal if it satisfies the below two conditions:

o Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
o Consistency: the second required condition is consistency, which is needed only for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least-cost path.
Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function, and the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.
Space Complexity: The space complexity of the A* search algorithm is O(b^d).
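
The following Python sketch of A* mirrors the UCS sketch, with the priority queue ordered by f(n) = g(n) + h(n) instead of g(n) alone. It assumes the same problem interface and a caller-supplied heuristic h; keeping the best known g value per state plays the role of Step 5's "lowest g(n')" rule.

import heapq
import itertools

def a_star_search(problem, h):
    # Priority queue ordered by f(n) = g(n) + h(n).
    counter = itertools.count()                  # tie-breaker for the heap
    start = problem.initial_state
    frontier = [(h(start), next(counter), 0, start, [])]   # (f, tie, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path, g
        if g > best_g.get(state, float('inf')):
            continue                             # stale entry: a cheaper path to state exists
        for action in problem.actions(state):
            child = problem.result(state, action)
            new_g = g + problem.step_cost(state, action, child)
            if new_g < best_g.get(child, float('inf')):   # keep only the lowest-g(n') path
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h(child), next(counter), new_g, child, path + [action]))
    return None, float('inf')                    # no solution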

Local Search Algorithms

The solutions to some kinds of problems don't need a complete path; they need only the goal state.
It does not matter which path was chosen to reach the goal.
Example: 8-queens problem.

Local search algorithms are suitable to solve this kind of problem.
Local search algorithms concentrate only on the current state.
Examples of local search algorithms:
Hill climbing
Simulated annealing
Local beam search
Genetic algorithm

Hill Climbing Algorithm in Artificial Intelligence

o The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak value where no neighbor has a higher value.
o The hill climbing algorithm is a technique which is used for optimizing mathematical problems. One of the widely discussed examples of the hill climbing algorithm is the traveling-salesman problem, in which we need to minimize the distance traveled by the salesman.
o It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.
o A node of the hill climbing algorithm has two components, which are state and value.
o Hill climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or graph, as it only keeps a single current state.

Features of Hill Climbing:

Following are some main features of the hill climbing algorithm:
o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
o Greedy approach: The hill-climbing search moves in the direction which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not remember the previous states.
State-space Diagram for Hill Climbing:
The state-space landscape is a graphical representation of the hill-climbing algorithm which shows a graph between the various states of the algorithm and the objective function/cost.
On the Y-axis we have taken the function, which can be an objective function or a cost function, and state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum and local minimum. If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum and local maximum.

[Figure: state-space landscape showing current state, local maximum, "flat" local maximum, shoulder, and global maximum along the state space (X-axis) vs. objective function (Y-axis)]

Different regions in the state-space landscape:

Local maximum: A local maximum is a state which is better than its neighbor states, but there is also another state which is higher than it.
Global maximum: The global maximum is the best possible state of the state-space landscape. It has the highest value of the objective function.

Current state: It is the state in the landscape diagram where the agent is currently present.
Flat local maximum: It is a flat space in the landscape where all the neighbor states of the current state have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:

1) First-Choice Hill Climbing
2) Random-restart Hill Climbing

First-Choice Hill Climbing

First-choice hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates one neighbor node state at a time and selects the first one which improves the current cost, setting it as the current state. It checks only one successor state at a time, and if it finds one better than the current state, it moves there; otherwise it stays in the same state. This algorithm has the following features:

o Less time consuming
o Less optimal solution, and the solution is not guaranteed
Algorithm for First-Choice Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.

o Step 4: Check the new state:

a. If it is the goal state, then return success and quit.
b. Else, if it is better than the current state, then assign the new state as the current state.
c. Else, if it is not better than the current state, then return to Step 2.
o Step 5: Exit.

This is a good strategy when a state has many (e.g. thousands of) successors.
It has the disadvantage that it may get stuck in a local maximum.

2. Random-restart Hill Climbing

It overcomes the problem of the previous method (getting stuck in a local maximum).
It conducts a series of hill-climbing searches from randomly generated initial states until a goal is found.
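
Both variants can be sketched in a few lines of Python. The code below is illustrative, not from the notes: it assumes caller-supplied callbacks neighbors(state) returning a list of successor states, value(state) returning the objective to maximize, random_state() producing a random initial state, and goal_test(state).

import random

def first_choice_hill_climbing(initial_state, neighbors, value):
    # First-choice variant: look at successors one at a time (here in random
    # order) and move to the first one that improves the objective value.
    current = initial_state
    while True:
        candidates = list(neighbors(current))
        random.shuffle(candidates)
        improved = False
        for candidate in candidates:
            if value(candidate) > value(current):
                current = candidate              # first improving successor becomes current
                improved = True
                break
        if not improved:
            return current                       # no better neighbor: local maximum (or plateau)

def random_restart_hill_climbing(random_state, neighbors, value, goal_test, restarts=100):
    # Random-restart wrapper: re-run hill climbing from random initial states
    # until a goal is found or the restart budget is exhausted.
    best = None
    for _ in range(restarts):
        result = first_choice_hill_climbing(random_state(), neighbors, value)
        if goal_test(result):
            return result
        if best is None or value(result) > value(best):
            best = result
    return best                                  # best local maximum seen if no goal found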

Problems in Hill Climbing Algorithm:

1. Local maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there is another state also present which is higher than the local maximum.

Solution: The backtracking technique can be a solution to the local maximum in the state-space landscape. Create a list of the promising paths so that the algorithm can backtrack the search space and explore other paths as well.

[Figure: local maximum]

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm does not find any best direction to move. A hill-climbing search might get lost in the plateau area.

Solution: The solution for the plateau is to take big steps (or very little steps) while searching. Randomly select a state which is far away from the current state, so it is possible that the algorithm could find a non-plateau region.

[Figure: plateau/flat maximum]

3. Ridges: A ridge is a special form of the local maximum. It is an area which is higher than its surrounding areas but which itself has a slope, and it cannot be reached in a single move.

Solution: With the use of bidirectional search, or by moving in different directions, we can improve this problem.

[Figure: ridge]
Simulated Annealing
A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum.

In contrast, a purely random walk, that is, moving to a successor chosen uniformly at random from the set of successors, is complete but extremely inefficient.

Simulated annealing is an algorithm which combines hill climbing with a random walk. It gives both efficiency and completeness.

In mechanical terms, annealing is a process of hardening a metal or glass by heating it to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state.

The same idea is used in simulated annealing, in which the algorithm picks a random move instead of picking the best move. If the random move improves the state, then it follows the same path.

Otherwise, the algorithm follows the path with a probability of less than 1.

The probability decreases exponentially with the "badness" of the move (delta E).

The probability also decreases as the temperature T goes down. It means that "bad moves" are more likely to be allowed at the start, when T is high, and they become more unlikely as T decreases.

If the schedule lowers T slowly enough, the algorithm will find a global optimum.
Algorithm - Simulated Annealing

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to "temperature"
  current <- MAKE-NODE(problem.INITIAL-STATE)
  for t = 1 to infinity do
      T <- schedule(t)
      if T = 0 then return current
      next <- a randomly selected successor of current
      deltaE <- next.VALUE - current.VALUE
      if deltaE > 0 then current <- next
      else current <- next only with probability e^(deltaE/T)
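
A hedged Python version of the same idea is sketched below, assuming caller-supplied neighbors(state) and value(state) callbacks (value is the objective to be maximized) and a cooling schedule mapping time to temperature; the exponential schedule shown is only an example, not part of the notes.

import math
import random

def simulated_annealing(initial_state, neighbors, value, schedule):
    # Always accept uphill moves; accept downhill moves with probability
    # e^(deltaE / T), where T comes from the cooling schedule.
    current = initial_state
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random.choice(list(neighbors(current)))   # a random successor
        delta_e = value(nxt) - value(current)           # > 0 means the move is uphill
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
        t += 1

def example_schedule(t, T0=1.0, decay=0.005, horizon=10_000):
    # An example exponential cooling schedule (an assumption, not from the notes).
    return T0 * math.exp(-decay * t) if t < horizon else 0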

Local Beam Search Algorithm

The local beam search algorithm keeps track of 'k' states rather than one state.
It begins with 'k' randomly generated states.
At each step, all the successors of all 'k' states are generated.
If any one is a goal, the algorithm halts.
Otherwise, it selects the 'k' best successors from the complete list and repeats.

Random-restart algorithm vs. Local beam search algorithm

In random-restart search, each search process runs independently of the others.
In local beam search, useful information is passed among the parallel search threads.
In effect, the states that generate the best successors say to the others, "Come over here, the grass is greener."
So, the algorithm quickly abandons unfruitful searches and moves its resources to where the most progress is being made.
Genetic Algorithm

A genetic algorithm is a local search algorithm in which successor states are generated by combining two parent states rather than by modifying a single state.

Similar to beam search, a GA begins with a set of k randomly generated states, called the population.

Each state is represented as a string over a finite alphabet.

Each state is rated by an objective function called the 'fitness' function.
A fitness function should return higher values for better states. Example: for 8-queens, the number of non-attacking pairs of queens.

Production of offspring states:
The parent states which participate in the production of child states are determined by their 'fitness'.
A crossover point is chosen randomly from the positions in the string.
The offspring themselves are created by crossing over the parent strings at the crossover point.

Mutation is done randomly in offspring states.

[Figure: the genetic algorithm applied to 8-queens digit strings: initial population, fitness function, selection, crossover, and mutation]
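
A minimal Python sketch of these steps (fitness-proportional selection, single-point crossover, random mutation) is given below. It assumes individuals are equal-length strings over the given alphabet and that the caller supplies a non-negative fitness function, e.g. the number of non-attacking pairs of queens for 8-queens; the generation count and mutation rate are illustrative parameters.

import random

def genetic_algorithm(population, fitness, alphabet, generations=1000, mutation_rate=0.1):
    # population: list of equal-length strings over `alphabet`;
    # fitness: non-negative function, higher is better (e.g. non-attacking queen pairs).
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        new_population = []
        for _ in range(len(population)):
            # Select two parents with probability proportional to fitness.
            parent1, parent2 = random.choices(population, weights=weights, k=2)
            # Single-point crossover at a random position.
            point = random.randrange(1, len(parent1))
            child = parent1[:point] + parent2[point:]
            # Random mutation: occasionally replace one character.
            if random.random() < mutation_rate:
                pos = random.randrange(len(child))
                child = child[:pos] + random.choice(alphabet) + child[pos + 1:]
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)          # fittest individual in the final population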

Searching with Nondeterministic Actions

When the environment is fully observable and deterministic, the agent knows what the effects of each action are. Percepts provide no useful information.

When the environment is nondeterministic, percepts tell the agent which of the possible outcomes of its actions has actually occurred.

So the solution to a nondeterministic problem is not a sequence but a contingency plan (plan B).

[Figure: the 8 possible states of the vacuum world]

Nondeterministic (erratic) vacuum cleaner
In the erratic vacuum world, the Suck action works as follows:
When applied to a dirty square, the action cleans the square and sometimes cleans up dirt in an adjacent square as well.
When applied to a clean square, the action sometimes deposits dirt.

AND-OR SEARCH TREES

In a deterministic environment, branching is introduced by the agent's own choices in each state. These nodes are called OR nodes.
In a nondeterministic environment, branching is also introduced by the environment's choice of outcome for each action. These nodes are called AND nodes.

[Figure: illustration of an AND-OR search tree]
