
Unit II

Search Algorithms
2.1 Random search
Random Search is a simple, uninformed search algorithm used in Artificial
Intelligence (AI) and optimization problems. It explores the solution space by
generating solutions randomly and checking their fitness without using any
domain-specific knowledge or heuristics.
Features:
• Uninformed search – no prior knowledge is used about the problem.
• Stochastic nature – solutions are generated at random.
• Applicable to – optimization problems where the search space is too large or complex for exhaustive search.
How Random Search Works:
1. Initialize: Define the search space and number of iterations.
2. Generate Candidate: Randomly generate a solution from the search
space.
3. Evaluate: Measure the quality of the solution using an objective (fitness)
function.
4. Store Best: If it is better than the previous best, store it.
5. Repeat: Continue for a fixed number of iterations or until a stopping
condition is met.
Pseudo-code:
Random_Search(objective_function, search_space, iterations):
    best_solution = None
    best_score = ∞ (or -∞ depending on the problem)

    for i in 1 to iterations:
        candidate = generate_random_solution(search_space)
        score = objective_function(candidate)

        if score better than best_score:
            best_solution = candidate
            best_score = score

    return best_solution
Example:
Suppose we want to find the minimum value of a function:
f(x) = x² for x in the range [-10, 10]
1. Randomly choose values of x in that range.
2. Calculate f(x) for each value.
3. Keep track of the x that gives the smallest f(x).
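A minimal Python sketch of this example (the function name random_search and the choice of 1000 iterations are illustrative, not part of the method):

import random

def random_search(objective, low, high, iterations=1000):
    best_x = None
    best_score = float("inf")          # minimization, so start from +infinity
    for _ in range(iterations):
        x = random.uniform(low, high)  # randomly generate a candidate solution
        score = objective(x)           # evaluate it with the objective (fitness) function
        if score < best_score:         # store it only if it beats the best so far
            best_x, best_score = x, score
    return best_x, best_score

best_x, best_fx = random_search(lambda x: x ** 2, -10, 10)
print("best x:", best_x, "f(x):", best_fx)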
Advantages:
• Very simple and easy to implement.
• Doesn’t require gradient or problem-specific knowledge.
• Can escape local optima due to randomness.
Disadvantages:
• Inefficient for large or high-dimensional problems.
• No guarantee of finding the optimal solution.
• Slow convergence compared to informed search methods.
Applications in AI:
• Neural network hyperparameter tuning.
• Game strategy search.
• AI planning and decision-making problems.
2.2 Search with closed and open list
In Artificial Intelligence (AI), particularly in informed search algorithms (like
A*, Dijkstra's, etc.), the concepts of open list and closed list are used to keep
track of which nodes to explore and which nodes have already been explored.
Open List (Frontier):
• A list (or priority queue) of nodes that have been discovered but not yet explored.
• These are candidates for expansion.
• At each step, the algorithm picks the best node (based on some cost function) from this list to expand next.
Closed List (Explored Set):
• A list of nodes that have already been explored.
• Once a node is expanded (i.e., all of its neighbors are considered), it is moved from the open list to the closed list.
• Helps to avoid re-exploring nodes, saving time and preventing loops.
How It Works (Basic Steps):
1. Initialize the open list with the start node.
2. The closed list is initially empty.
3. While the open list is not empty:
o Choose the best node from the open list (based on cost or
heuristic).
o Move it to the closed list.
o Expand it by finding its neighbors.
o For each neighbor:
   • If it’s in the closed list, skip it.
   • If it’s not in the open list, add it.
   • If it’s already in the open list with a higher cost, update it.
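A minimal Python sketch of this bookkeeping, using a priority queue as the open list and a set as the closed list (the graph, edge costs, and node names below are made up for illustration; duplicates are handled lazily by skipping already-closed nodes rather than updating open-list entries in place):

import heapq

def search_with_open_closed(graph, costs, start, goal):
    open_list = [(0, start, [start])]        # (cost so far, node, path): discovered but not yet explored
    closed_list = set()                      # nodes already explored
    while open_list:
        cost, node, path = heapq.heappop(open_list)   # pick the best (cheapest) node
        if node == goal:
            return path, cost
        if node in closed_list:              # already explored: skip it
            continue
        closed_list.add(node)                # move the node to the closed list
        for neighbor in graph.get(node, []): # expand: consider all neighbors
            if neighbor not in closed_list:
                heapq.heappush(open_list, (cost + costs[(node, neighbor)],
                                           neighbor, path + [neighbor]))
    return None, float("inf")

graph = {'Start': ['A', 'B'], 'A': ['C', 'D'], 'B': ['Goal'],
         'C': [], 'D': [], 'Goal': []}
costs = {('Start', 'A'): 1, ('Start', 'B'): 4, ('A', 'C'): 2,
         ('A', 'D'): 5, ('B', 'Goal'): 3}
print(search_with_open_closed(graph, costs, 'Start', 'Goal'))   # (['Start', 'B', 'Goal'], 7)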
Example (A* Search):
Let’s say we're finding a path from Start to Goal:
1. Open list = {Start}
Closed list = {}
2. Expand Start, add its neighbors (A, B) to the open list
Open list = {A, B}
Closed list = {Start}
3. Expand the lowest-cost node (say A), add its neighbors
Open list = {B, C, D}
Closed list = {Start, A}
4. Repeat until Goal is found or open list is empty.
Visualization:
Step Open List Closed List
1 Start –
2 A, B Start
3 B, C, D Start, A
... ... ...
Advantages:
• Avoids re-exploring the same nodes.
• Efficient and prevents infinite loops.
• Useful in graph-based search problems.
Used In:
• A* Algorithm
• Dijkstra’s Algorithm
• Best-First Search
• Uniform Cost Search
2.3 Depth first Search
Depth-First Search (DFS) is a graph or tree traversal algorithm that explores as
far as possible along each branch before backtracking. It uses a stack (or
recursion) to remember the next node to visit. DFS is used in tasks like solving
mazes, scheduling, and detecting cycles in graphs.

Example
Depth-First Search (DFS) is an algorithm used to explore all the nodes of a
graph or tree by going as deep as possible along each branch before
backtracking.
Steps to Perform DFS:
1. Create a Stack
Make a stack with a size equal to the number of vertices in the graph.
2. Start from Any Node
Choose any node to start (e.g., A). Mark it as visited and push it onto the
stack.
3. Visit Neighbors
From the top node in the stack, visit one of its unvisited neighbors. Mark
it visited and push it onto the stack.
4. Repeat
Keep visiting unvisited neighbors and pushing them onto the stack.
5. Backtrack if Needed
If a node has no unvisited neighbors, pop it from the stack (backtrack to
the previous node).
6. Continue Until Done
Repeat steps 3–5 until the stack is empty.
7. Create Spanning Tree
When done, remove unused edges from the graph. The remaining
structure is the DFS spanning tree (a tree with all the visited nodes and
no loops).
Step 1: Select vertex A as the source node and mark it as visited.
• Push vertex A onto the top of the stack.

Step 2: Visit any unvisited vertex adjacent to vertex A, say B, and mark it as visited.
• Push vertex B onto the top of the stack.

Step 3: Of vertex B's adjacent unvisited vertices C and D, choose one. Imagine you have chosen vertex C; mark C as visited.
• Push vertex C onto the top of the stack.

Step 4: Of vertex C's adjacent unvisited vertices, select vertex D and mark it as visited.
• Push vertex D onto the top of the stack.

Step 5: Vertex E is the only unvisited vertex adjacent to vertex D, so mark it as visited.
• Push vertex E onto the top of the stack.

Step 6: Vertex E's adjacent vertices, namely C and D, have already been visited, so pop vertex E from the stack.

Step 7: Now that all of vertex D's adjacent vertices have been visited, pop vertex D from the stack.

Step 8: Similarly, vertex C's adjacent vertices have already been visited, so pop it from the stack.

Step 9: There is no unvisited vertex adjacent to B, so pop it from the stack.

Step 10: All of vertex A's adjacent vertices have already been visited, so pop vertex A from the stack as well. The stack is now empty and the traversal is complete.
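A short Python sketch of this stack-based traversal (the adjacency list below is an assumed graph chosen so that the run reproduces the A, B, C, D, E visiting order of the walkthrough; the original figure is not reproduced here):

def dfs(graph, start):
    visited = []                  # visiting order
    stack = [start]               # push the source vertex
    while stack:
        vertex = stack.pop()      # take the vertex on top of the stack
        if vertex in visited:
            continue
        visited.append(vertex)    # mark it as visited
        for neighbor in reversed(graph[vertex]):   # reversed so the first listed neighbor is explored first
            if neighbor not in visited:
                stack.append(neighbor)
    return visited

graph = {'A': ['B', 'C'], 'B': ['A', 'C', 'D'], 'C': ['A', 'B', 'D', 'E'],
         'D': ['B', 'C', 'E'], 'E': ['C', 'D']}
print(dfs(graph, 'A'))   # ['A', 'B', 'C', 'D', 'E']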

Application of Depth-First Search Algorithm


1. Cycle Detection in a Graph:
DFS can detect cycles. If DFS finds a back edge (an edge that connects to
an already visited ancestor), the graph has a cycle.
2. Topological Sorting:
DFS helps order tasks based on dependencies. It is useful in scheduling
jobs, compiling code, and evaluating formulas.
3. Checking if a Graph is Bipartite:
DFS can help color each node with two colors. If two connected nodes
get the same color, the graph is not bipartite.
4. Finding Strongly Connected Components:
DFS is used in algorithms like Kosaraju’s to find groups of nodes where
every node is reachable from every other node.
5. Solving Mazes and Puzzles:
DFS explores one path deeply before trying others, making it useful for
solving mazes or puzzles that have one solution.
6. Path Finding:
DFS can be used to find a path from one node to another by keeping
track of the path using a stack.
7. Spanning Tree Construction:
DFS can build a spanning tree (a tree that connects all nodes without
cycles) from an unweighted graph.
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import depth_first_order

# Adjacency matrix of a directed graph with 13 nodes (a 1 marks an edge).
graph = csr_matrix([[0,1,1,1,0,0,0,0,0,0,0,0,0],
                    [0,0,0,0,1,1,0,0,0,0,0,0,0],
                    [0,0,0,0,0,0,1,1,0,0,0,0,0],
                    [0,0,0,0,0,0,0,0,1,1,0,0,0],
                    [0,0,0,0,0,0,0,0,0,0,1,1,0],
                    [0,0,0,0,0,0,0,0,0,0,0,0,1],
                    [0,0,0,0,0,0,0,0,0,0,0,0,0],
                    [0,0,0,0,0,0,0,0,0,0,0,0,0],
                    [0,0,0,0,0,0,0,0,0,0,0,0,0],
                    [0,0,0,0,0,0,0,0,0,0,0,0,0],
                    [0,0,0,0,0,0,0,0,0,0,0,0,0],
                    [0,0,0,0,0,0,0,0,0,0,0,0,0],
                    [0,0,0,0,0,0,0,0,0,0,0,0,0]])
print(graph)

# Depth-first visiting order starting from node 0.
order = depth_first_order(graph, 0, directed=True)[0]
print("After DFS", order)

OUTPUT :
(0, 1) 1
(0, 2) 1
(0, 3) 1
(1, 4) 1
(1, 5) 1
(2, 6) 1
(2, 7) 1
(3, 8) 1
(3, 9) 1
(4, 10) 1
(4, 11) 1
(5, 12) 1
After DFS [ 0 1 4 10 11 5 12 2 6 7 3 8 9]
2.4 Breadth first search
Breadth-First Search (BFS) is a graph or tree traversal algorithm that explores
all the nearest (adjacent) nodes of a starting point before moving to the next
level of nodes. It uses a queue to keep track of nodes to visit next. BFS is useful
for finding the shortest path in unweighted graphs and exploring data level by
level.
The architecture of BFS algorithm
Simple Steps of BFS (Breadth-First Search):
1. Start at Any Node:
Choose any node to start. Mark it as visited and add it to a queue.
2. Visit Neighbors:
Visit all unvisited neighbors of the current node, mark them as visited,
and add them to the queue.
3. Repeat Using FIFO:
Remove the front node from the queue and repeat the process for its
neighbors.
Continue until the queue is empty.
BFS uses a First-In-First-Out (FIFO) queue to visit nodes level by level.
Why We Need BFS Algorithm:
We use the Breadth-First Search (BFS) algorithm because:
• It helps find the shortest path in unweighted graphs.
• It visits nodes level by level, using the fewest steps.
• It has a simple and reliable structure.
• It gives accurate results for many types of problems.
• It avoids infinite loops by using a visited list.
Step 1
Consider a vertex of the graph, say V.
Step 2
If vertex V has not been visited yet, add it to the BFS queue.
Step 3
Mark vertex V as visited.
Step 4
While the BFS queue is not empty, remove vertex V from the front of the queue.
Step 5
Retrieve all vertices of the graph that are adjacent to vertex V.
Step 6
For each adjacent vertex, say V1, if it has not been visited yet, add V1 to the BFS queue.
Step 7
BFS visits V1, marks it as visited, and deletes it from the queue once it has been processed.

Example BFS Algorithm


Step 1)

You have a graph of seven numbers ranging from 0 – 6.


Step 2)
0 or zero has been marked as a root node.
Step 3)

0 is visited, marked, and inserted into the queue data structure.


Step 4)
The remaining adjacent, unvisited nodes of 0 are visited, marked, and inserted into the queue.
Step 5)
Traversing iterations are repeated until all nodes are visited.
Rules of BFS Algorithm:
• BFS uses a queue (FIFO) to store nodes.
• Start from any node marked as the root.
• Visit all adjacent unvisited nodes, mark them as visited, and add to the queue.
• If no adjacent node is found, remove the front node from the queue.
• Repeat the process until all nodes are visited.
• BFS avoids infinite loops by marking visited nodes.
• Each visited node is marked as completed after it is processed.
Here are some common real-life uses of the BFS algorithm:
• Unweighted Graphs: Finds the shortest path and minimum spanning tree quickly and accurately.
• P2P Networks: Helps find nearby nodes to locate data faster.
• Web Crawlers: Visits web pages level by level to build indexes for search engines.
• Navigation Systems: Finds all nearby locations from a starting point.
• Network Broadcasting: Sends data packets to all reachable nodes efficiently.

from collections import deque

def bfs(graph, start):
    visited = set()            # To keep track of visited nodes
    queue = deque([start])     # Initialize queue with the start node
    visited.add(start)

    while queue:
        node = queue.popleft()    # Get the front node
        print(node, end=' ')      # Process the node (print it)

        # Visit all unvisited neighbors
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

# Define the graph as an adjacency list
graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['E', 'F'],
    'D': [],
    'E': [],
    'F': []
}

# Run BFS starting from node 'A'
bfs(graph, 'A')

Output:
A B C D E F
2.5 Heuristic search
A heuristic is a technique or rule used to solve problems faster or find an
approximate solution when exact methods are too slow or impossible. It acts
like a shortcut, helping make decisions quickly by estimating which option is
better, but it doesn’t always guarantee the best answer.
A heuristic function (written as h(n)) estimates how close you are to the goal in
search problems. It gives a positive value that helps decide the next step to
take. The heuristic should never overestimate the true cost to reach the goal
— meaning h(n) ≤ actual cost — to be considered admissible.
In short:
Heuristic = smart guess to solve problems faster, trading exactness for speed.

Heuristics help solve hard problems faster by giving approximate answers that
are good enough.
Applications of Heuristic Algorithms:
Heuristic algorithms are used in many fields like transportation, logistics, and
finance. For example, they help find the best delivery routes for packages or
reduce risks when investing money.
Examples of Heuristics:
1. Your father says the distance from Esplanade to Tollygunge is 15 km, but
the actual distance may change depending on traffic or route — so you
use this as a rough estimate.
2. At a grocery store, you pick the breakfast cereal brand you like best from
before instead of checking all options again.
3. If it looks cloudy, you take an umbrella just in case it rains later.
How heuristic works on Tic-Tac-Toe:
In Tic-Tac-Toe, the game is played on a 3x3 grid where two players take turns
placing their symbols (‘X’ or ‘O’). A heuristic helps the player decide the best
move by quickly estimating which moves are more likely to lead to winning,
blocking the opponent, or creating chances to win later — without checking
every possible move deeply.

Simple Definition – Heuristic in Tic-Tac-Toe:


A simple heuristic for Tic-Tac-Toe means using smart rules to pick the best
move instead of checking every possible move.
Heuristic Values of Positions:
• Center = 4 (Best position)
• Corner = 3
• Side = 2
Five Rule-Based Strategies:
1. Take the win – If you can win in one move, do it.
2. Block the opponent – If the opponent can win in their next move, stop
them.
3. Create a fork – Make a move that gives you two ways to win.
4. Block a fork – Prevent your opponent from creating a fork.
5. Maximize winning chances – Play in a spot that gives you the most ways
to win.
These rules help the player make smart decisions and win the game more
often.
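A minimal sketch of the positional part of this heuristic in Python (the rule-based strategies above would normally be checked first; the names POSITION_VALUE and best_free_square are illustrative helpers, not a standard API):

# Positional heuristic values: center = 4, corners = 3, sides = 2.
# The board is a list of 9 cells, indexed 0..8 row by row.
POSITION_VALUE = [3, 2, 3,
                  2, 4, 2,
                  3, 2, 3]

def best_free_square(board):
    """Pick the empty square with the highest positional value."""
    free = [i for i, cell in enumerate(board) if cell == ' ']
    return max(free, key=lambda i: POSITION_VALUE[i])

board = ['X', ' ', ' ',
         ' ', ' ', ' ',
         ' ', ' ', 'O']
print(best_free_square(board))   # 4 -> the center, the highest-valued free square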
Heuristic Search Techniques in Artificial Intelligence:
Heuristic search techniques are smart ways to find solutions in AI by using rules
or hints (heuristics) instead of searching everything.
They are divided into two types:
a. Direct Heuristic Search (Uninformed or Blind Search):
• No extra information is used.
• Searches all possible paths blindly.
• Slower and uses more memory.
• Examples: Breadth First Search (BFS), Depth First Search (DFS)
b. Weak Heuristic Search (Informed or Heuristic Search):
• Uses extra knowledge to make smart choices.
• Faster and more efficient.
• Each node uses a heuristic function to guide the search.
• Examples: Best First Search, A* Search, Hill Climbing
These techniques help AI solve complex problems quickly and efficiently.
2.6 Best first search
Greedy Best-First Search is a smart search method used in AI that tries to reach
the goal as quickly as possible. It always picks the path that looks best at the
moment using a rule called a heuristic.
• It uses a heuristic function h(n) to guess how close a node is to the goal.
• It chooses the closest-looking node (smallest h(n)) first.
• It does not guarantee the shortest path, just the fastest-looking one.
• It uses a priority queue to manage which node to explore next.
• Formula: f(n) = h(n)
It’s fast but not always the most accurate.
How Greedy Best-First Search Works:
• Greedy Best-First Search chooses the path that looks closest to the goal using a heuristic function.
• It keeps checking and picking the most promising node based on the lowest estimated cost (h(n)) to the goal.
• It does not care about the total path cost, only how near the next step seems to the goal.
• It keeps repeating this process until it reaches the goal.
Steps
Simple Definition – Steps of Greedy Best-First Search:
1. Add the starting node to the OPEN list.
2. If the OPEN list is empty, stop – no solution found.
3. Remove the node with the lowest h(n) (closest to goal) from OPEN and
move it to the CLOSED list.
4. Expand this node – find all possible next steps (successors).
5. If any successor is the goal, stop – goal found.
6. For each successor, check if it’s already in OPEN or CLOSED. If not, add it
to OPEN.
7. Repeat from Step 2 until goal is found or OPEN is empty.
Example of Greedy Best-First Search:
In this example, we solve a search problem using Greedy Best-First Search,
which uses the evaluation function:
f(n) = h(n) (estimated cost from current node to goal).
We use two lists:
• OPEN list – stores nodes to explore.
• CLOSED list – stores nodes already explored.
How it works:
1. Start with the initial node in the OPEN list.
2. At each step, pick the node with the lowest h(n) value from OPEN.
3. Move it to CLOSED, expand it (generate next possible nodes).
4. Add new nodes (not in OPEN or CLOSED) to the OPEN list.
5. Repeat until the goal node is reached.
This way, the algorithm chooses the most promising node first, based on the
heuristic value.

Simple Definition – Node Expansion in Greedy Best-First Search:


We are expanding nodes step by step using Greedy Best-First Search, where
each step chooses the node with the lowest heuristic value (h(n)) from the
OPEN list.
Steps:
• Start at node S
• Add neighbors A, B to the OPEN list
• Move S to the CLOSED list

Iteration Details:
Initialization:
• OPEN: [A, B]
• CLOSED: [S]
Iteration 1:
• Select B (lowest h(n))
• Add its children E, F
• OPEN: [A, E, F]
• CLOSED: [S, B]
Iteration 2:
• Select F, expand to G
• OPEN: [A, E, G]
• CLOSED: [S, B, F]
Iteration 3:
• Select G (goal reached)
• OPEN: [A, E, I] (if I was added)
• CLOSED: [S, B, F, G]

Final Solution Path:


S→B→F→G
This is the path found using Greedy Best-First Search, by always expanding the
most promising node first.
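A compact Python sketch of this procedure (the graph and heuristic values below are assumed, chosen only so that the run reproduces the S → B → F → G walkthrough; the original figure with the real values is not reproduced here):

import heapq

def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]     # priority queue ordered by f(n) = h(n)
    closed_list = set()
    while open_list:
        _, node, path = heapq.heappop(open_list) # most promising node (lowest h)
        if node == goal:
            return path
        if node in closed_list:
            continue
        closed_list.add(node)
        for successor in graph.get(node, []):
            if successor not in closed_list:
                heapq.heappush(open_list, (h[successor], successor, path + [successor]))
    return None

graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}       # assumed successors
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}    # assumed h(n) values
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']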
Advantages of Greedy Best-First Search:
1. Easy to Use: The algorithm is simple and easy to implement.
2. Fast: It works quickly, especially when speed is important.
3. Low Memory Use: Needs only a small amount of memory.
4. Flexible: Can be adapted to different types of problems.
5. Efficient with Good Heuristics: If the heuristic is good, it can find
solutions quickly, even in large problems.
Disadvantages of Greedy Best-First Search:
1. May Not Find Best Result: It doesn’t always give the best or correct solution.
2. Gets Stuck: Can get stuck in local best paths and miss the overall best one.
3. Needs Heuristic: Requires a good heuristic to work well, which can be hard to design.
4. Not Always Complete: Might not find a solution, especially in complex
problems or loops.
Applications of Greedy Best-First Search:
1. Pathfinding: Helps find the shortest way in maps or games.
2. Robotics & Navigation: Guides robots or systems to reach a target
location.
3. Machine Learning: Finds best choices while training models.
4. Optimization: Improves settings or results in a system.
5. Game AI: Picks the best move in games.
6. Natural Language Processing: Helps in tasks like translation or speech
recognition.
7. Image Processing: Breaks images into useful parts for analysis.
2.7 A* algorithm
A* algorithm is a smart search method used to find the shortest path from a
start point to a goal. It combines two things:
• g(n): the actual cost to reach a node from the start
• h(n): an estimate of the cost from that node to the goal (heuristic)
It adds these two costs to decide which path to explore next, helping it find the
best route quickly and efficiently. Many games and maps use A* because it
finds the shortest path fast and accurately.

A* Search Algorithm Steps:


1. Start by putting the starting node in the OPEN list.
2. If the OPEN list is empty, stop and say no solution.
3. Pick the node from OPEN with the smallest (g + h) value. If it is the goal,
stop and say success.
4. Otherwise, expand this node: create its child nodes and move the
current node to the CLOSED list.
5. For each child node, if it’s not in OPEN or CLOSED, calculate (g + h) and
add it to OPEN.
6. If the child node is already in OPEN or CLOSED, update it if the new path
is better (lower g cost).
7. Repeat from step 2 until the goal is found or no nodes left.
Example
• Start with node S in the OPEN list with cost 5.
• Iteration 1: Expand S to get nodes A (cost 4) and G (cost 10).
• Iteration 2: Expand A to get nodes C (cost 4), B (cost 7), and keep G (cost 10).
• Iteration 3: Expand C to get nodes G (cost 6) and D (cost 11), keep B (cost 7), and G (cost 10).
• Iteration 4: The best path found is S → A → C → G with total cost 6. This is the shortest path.
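A short Python sketch of these steps (the edge costs and heuristic values below are assumed, chosen so that the run reproduces the iterations above, since the original figure is not included in the text):

import heapq

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]      # (f, g, node, path)
    closed_list = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # node with the smallest f = g + h
        if node == goal:
            return path, g
        if node in closed_list:
            continue
        closed_list.add(node)
        for successor, step_cost in graph.get(node, []):
            if successor not in closed_list:
                new_g = g + step_cost
                heapq.heappush(open_list, (new_g + h[successor], new_g,
                                           successor, path + [successor]))
    return None, float("inf")

graph = {'S': [('A', 1), ('G', 10)], 'A': [('C', 1), ('B', 2)],   # assumed edge costs
         'C': [('D', 3), ('G', 4)], 'B': [], 'D': [], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}              # assumed h(n) values
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'C', 'G'], 6)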

Advantages of A* Algorithm:
• A* is better than many other search algorithms.
• It always finds the best (optimal) solution if one exists.
• It can solve very complex problems.
Disadvantages of A* Algorithm:
• Sometimes it may not give the shortest path because it depends on heuristics and approximations.
• It can be complex and slow for very big problems.
• It uses a lot of memory because it stores all the generated nodes, which can be a problem for large tasks.
Applications of A* Algorithm:
• Used in robots to help them find paths around obstacles.
• Used in video games for characters to find paths.
• Used in route planning and logistics to find the best way to deliver or travel.
• Used in artificial intelligence for different optimization problems.
2.8 Game Search
Game Playing in AI (Simple Definition):
Game playing in AI focuses on making the best moves in games by using only
the rules, legal moves, and winning conditions. Because there are many
possible moves, simple searches like BFS take too long. So, smarter methods
are used to generate only good moves and explore the best moves first.
The most common method is Minimax search, which is a depth-first search
limited by depth. It helps decide the best move by assuming both players try
their best to win. It’s used in games like chess and tic-tac-toe.
Min-Max Search (Simple Definition):
Min-Max is a search method used in two-player games to find the best move
by looking ahead at possible future moves. It assumes both players try to win
by making the best moves. The search goes down the game tree to a certain
depth (called ply) and evaluates positions using a scoring system (utility
values).
• MAX player tries to maximize the score (choose the highest value).
• MIN player tries to minimize the score (choose the lowest value).
The algorithm explores moves, assigns scores to the end positions, and then
works back up the tree, choosing moves that maximize or minimize the score
depending on whose turn it is.
Two main parts help Min-Max:
1. MOVEGEN — generates all possible moves for a player.
2. STATIC — evaluates how good a position is for a player.
Min-Max repeats this until it reaches the set depth or a game-ending
condition. Then it chooses the best move for the current player.
Algorithm: MINMAX (position, depth, player)
1. If LAST_PLY(position, depth),
   then return VALUE = STATIC(position, player), PATH = nil.
2. Else, generate one more ply of the tree by calling the function MOVE_GEN(position, player) and set SUCCESSORS to the list it returns.
3. If SUCCESSORS is empty, there are no moves to be made; return the same structure that would have been returned if LAST_PLY had returned TRUE.
4. If SUCCESSORS is not empty, examine each element in turn and keep track of the best one.
5. After examining all the nodes, return VALUE = BEST-SCORE, PATH = BEST-PATH.
When the initial call to MINMAX returns, the best move from CURRENT is the first element in the PATH.
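A compact Python sketch of this procedure (MOVE_GEN and STATIC are passed in as plain functions; the tiny game tree and its leaf values in the usage example are illustrative):

def minimax(position, depth, is_max, move_gen, static):
    successors = move_gen(position)
    if depth == 0 or not successors:              # last ply or no moves: evaluate the position
        return static(position), [position]
    best_score, best_path = None, None
    for child in successors:
        score, path = minimax(child, depth - 1, not is_max, move_gen, static)
        if (best_score is None
                or (is_max and score > best_score)        # MAX keeps the highest score
                or (not is_max and score < best_score)):  # MIN keeps the lowest score
            best_score, best_path = score, [position] + path
    return best_score, best_path

tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
values = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
score, path = minimax('root', 2, True, lambda p: tree.get(p, []), lambda p: values.get(p, 0))
print(score, path)   # 3 ['root', 'L', 'L1'] (MAX at the root, MIN at the next ply)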
Alpha-Beta (α-β) Pruning (Simple Definition):
Alpha-Beta pruning is a technique used to make the Min-Max search faster by
cutting off branches in the game tree that won’t affect the final decision. It
reduces the number of states the algorithm needs to examine without
changing the result.
• Alpha (α) is the best value that the maximizing player (MAX) can guarantee so far.
• Beta (β) is the best value that the minimizing player (MIN) can guarantee so far.
During the search:
• If the MIN player finds a move with value less than or equal to α, it stops exploring further because MAX won’t allow that path.
• If the MAX player finds a move with value greater than or equal to β, it stops exploring further because MIN won’t allow that path.
This pruning helps ignore parts of the tree that won’t influence the outcome,
making the search faster while still finding the best move.

Example of pruning at a MIN ply:


At a MIN ply, the algorithm chooses the smallest value from the child nodes.
For example, if the child nodes have values -4, 5, and 0, the minimum is -4. This
value moves up the tree to influence the parent decision.
If another node (like E) has a value (e.g., 8) that is worse than the current best
choice for MIN, the algorithm stops exploring that node and its entire subtree
— this is called pruning.
Alpha-beta pruning keeps track of two values, α (best for MAX) and β (best for
MIN), and cuts off branches that can’t improve these values. This speeds up
the search by skipping parts of the tree that won’t affect the final result.
Because of this pruning, alpha-beta search examines roughly the square root
of the nodes that a normal Min-Max search would explore, making it much
more efficient.
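A sketch of the same search with α-β cut-offs added (same assumed move_gen/static interface as the Minimax sketch above; for brevity it returns only the value):

def alphabeta(position, depth, alpha, beta, is_max, move_gen, static):
    successors = move_gen(position)
    if depth == 0 or not successors:
        return static(position)
    if is_max:
        best = float('-inf')
        for child in successors:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, move_gen, static))
            alpha = max(alpha, best)              # best value MAX can guarantee so far
            if best >= beta:                      # MIN will never allow this path: prune the rest
                break
        return best
    else:
        best = float('inf')
        for child in successors:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, move_gen, static))
            beta = min(beta, best)                # best value MIN can guarantee so far
            if best <= alpha:                     # MAX will never allow this path: prune the rest
                break
        return best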
Water Jug Problem:
• You have two jugs: one holds 4 gallons (jug four) and the other holds 3 gallons (jug three).
• The goal is to get exactly 2 gallons in the 4-gallon jug.
• The state of the problem is represented as (amount in jug four, amount in jug three).
• The start state is (0, 0) — both jugs empty.
• The goal state is (2, n), where n can be any amount from 0 to 3 in jug three.
Key actions (rules) you can do:
1. Fill jug four completely.
2. Fill jug three completely.
3. Empty jug four.
4. Empty jug three.
5. Pour water from jug three into jug four until jug four is full or jug three is
empty.
6. Pour water from jug four into jug three until jug three is full or jug four is
empty.
Example solution steps:
• Start at (0,0).
• Fill jug three → (0,3).
• Pour jug three into jug four → (3,0).
• Fill jug three again → (3,3).
• Pour jug three into jug four until jug four is full → (4,2).
• Empty jug four → (0,2).
• Pour jug three into jug four → (2,0).
You now have exactly 2 gallons in jug four, which is the goal!
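A small breadth-first Python sketch of this state space (a state is the pair (amount in jug four, amount in jug three); the helper name water_jug is illustrative):

from collections import deque

def water_jug(goal=2, cap4=4, cap3=3):
    start = (0, 0)
    parents = {start: None}                    # remembers how each state was reached
    queue = deque([start])
    while queue:
        four, three = queue.popleft()
        if four == goal:                       # goal state (2, n) reached
            path, state = [], (four, three)
            while state is not None:
                path.append(state)
                state = parents[state]
            return list(reversed(path))
        pour34 = min(three, cap4 - four)       # amount that can move from jug three to jug four
        pour43 = min(four, cap3 - three)       # amount that can move from jug four to jug three
        successors = [
            (cap4, three), (four, cap3),       # fill jug four / fill jug three
            (0, three), (four, 0),             # empty jug four / empty jug three
            (four + pour34, three - pour34),   # pour jug three into jug four
            (four - pour43, three + pour43),   # pour jug four into jug three
        ]
        for state in successors:
            if state not in parents:
                parents[state] = (four, three)
                queue.append(state)
    return None

print(water_jug())   # prints one shortest sequence of states; the 6-step solution above is another valid answer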
Chess Problem
The Chess Problem is a simplified version of a chess game where we focus on
finding a sequence of legal moves, usually for a knight, from a starting position
to a goal position (winning state).
• Initial State: The starting arrangement of the chessboard.
• Final State: Any configuration where a player wins or achieves a specific goal.
• State Space: All possible board configurations resulting from legal moves.
• Moves: In the reduced version (like a 3x3 board), each move (e.g., move(1,8)) is predefined and represents a legal knight move.
• Knowledge Base: Contains facts (moves) and uses logic (like predicate calculus) to find valid moves.
• Example: From square 2, the knight can legally move to squares 7 or 9. So, move(2,7) and move(2,9) are valid.

Key Points:
• Each move changes the game state.
• A predicate like move(x, y) defines a legal move from square x to y.
• A unification algorithm is used to find valid moves by matching the current state with known legal moves.
• The problem involves many production rules and searching, making efficient algorithm implementation essential.
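A small Python sketch of the reduced 3x3 version (the move table plays the role of the move(x, y) facts; the 1-to-9, row-by-row board numbering is assumed from the example in which square 2 reaches squares 7 and 9):

from collections import deque

# Legal knight moves on a 3x3 board with squares numbered 1..9 (square 5 has none).
MOVES = {1: [6, 8], 2: [7, 9], 3: [4, 8], 4: [3, 9], 5: [],
         6: [1, 7], 7: [2, 6], 8: [1, 3], 9: [2, 4]}

def knight_path(start, goal):
    queue = deque([[start]])              # each queue entry is a path of squares
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for square in MOVES[path[-1]]:    # apply every legal move(x, y) fact
            if square not in seen:
                seen.add(square)
                queue.append(path + [square])
    return None

print(knight_path(2, 4))   # [2, 9, 4]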

Puzzles(Tiles) Problem
The 8-puzzle is a sliding tile puzzle played on a 3x3 board with 8 numbered tiles
(1 to 8) and 1 blank space. You can move a tile into the blank space if it is next
to it.
Rearrange the tiles from a starting position to a goal position by sliding the tiles
one at a time into the blank space.
• State: Any arrangement of the tiles on the board.
• Moves: Up, Down, Left, or Right – only if the tile is adjacent to the blank space.
• Initial State: Where the puzzle begins.
• Goal State: The correct, ordered arrangement of tiles (e.g., from 1 to 8 with the blank last).
• F(x) = g(x) + h(x):
  o g(x): Number of moves made so far (from start).
  o h(x): Estimated moves to reach the goal (heuristic).
  o Choose the move with the smallest f(x) value at each step.
Example

Step 1:
From the initial tray, there are two possible moves: moving tile 6 or tile 8 into the empty space.
• Two new states are formed: let's call them B and C.
• Calculate f(x) (total cost):
  o f(B) = 6
  o f(C) = 4
• Since C has the lower f(x) value (4), choose C as the next current state.
Step 2:
From tray C, the empty space can be filled by tile 5, 3, or 6 → 3 possible new states.
• Calculate f(x) for each.
• Let's say State F has the minimum f(x) = 4.
• Choose State F as the next current state.

Step 3:
From tray F, 4 tiles can move into the empty space: 2, 4, 5, 8 → 4 possible new states.
• Calculate f(x) for each.
• Let's say Tray I has the smallest f(x).
• Choose Tray I as the current state.
Step 4:
From tray I, 3 moves are possible: 7, 8, or 6 can move into the empty space.
• Apply the same logic: compute f(x) for each.
• After a few more steps, the goal state is reached.
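A minimal Python sketch of this f(x) = g(x) + h(x) bookkeeping (h(x) is taken here to be the number of misplaced tiles, one common admissible choice that the text does not fix; following the walkthrough, the sketch simply moves to the successor with the smallest f at each step, whereas a full A* search would keep every generated state on an OPEN list):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)           # 0 represents the blank, placed last

def h(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def successors(state):
    """All states reachable by sliding one adjacent tile into the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):      # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            new = list(state)
            new[blank], new[r * 3 + c] = new[r * 3 + c], new[blank]
            result.append(tuple(new))
    return result

def greedy_f_walk(start, max_steps=50):
    """At each step move to the successor with the smallest f = g + h, as in the walkthrough."""
    state, g = start, 0
    while state != GOAL and g < max_steps:
        state = min(successors(state), key=lambda s: (g + 1) + h(s))
        g += 1
    return state, g

start = (1, 2, 3, 4, 0, 6, 7, 5, 8)          # two moves away from the goal
print(greedy_f_walk(start))                  # ((1, 2, 3, 4, 5, 6, 7, 8, 0), 2)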
