Searching in AI

The document discusses various searching techniques in Artificial Intelligence (AI), categorizing them into uninformed and informed search strategies. It covers methods such as Breadth-First Search, Depth-First Search, and the A* algorithm, detailing their advantages, disadvantages, and applications in problem-solving. Additionally, it addresses Constraint Satisfaction Problems and adversarial search strategies used in game playing.

Uploaded by gesir71201

Artificial Intelligence - Searching Techniques

UNIT 2
1. Searching for Solutions
In Artificial Intelligence (AI), searching is the process of navigating through a problem space to
find a solution. A problem can be represented as:

Initial state: The starting point of the problem.


Goal state: The desired outcome.
Operators: Actions that change one state into another.
Path cost: The cost associated with reaching a goal.
Solution: A sequence of actions leading to the goal state.

AI search strategies are divided into uninformed (blind) search and informed (heuristic)
search.

2. Uninformed Search Strategies


Uninformed search strategies do not have any additional information about the problem
apart from its definition.

2.1 Breadth-First Search (BFS)

Explores nodes level by level, moving from one depth level to the next.
Uses a queue (FIFO - First In, First Out) to keep track of nodes.
Guarantees finding the shortest path if all actions have the same cost.

Advantages:

Complete (always finds a solution if one exists).


Optimal for uniform-cost problems.

Disadvantages:

High memory usage as it stores all nodes at the current level.


Slow for large problem spaces.

2.2 Depth-First Search (DFS)

Explores as far as possible along each branch before backtracking.


Uses a stack (LIFO - Last In, First Out) to store nodes.
Advantages:

Requires less memory than BFS.


Works well when solutions are deep in the tree.

Disadvantages:

Not guaranteed to find the shortest path.


May get stuck in infinite loops (use Depth-Limited Search to avoid this).

2.3 Depth-Limited Search (DLS)

Like DFS but with a depth limit.


Avoids infinite loops by restricting search depth.

2.4 Iterative Deepening Depth-First Search (IDDFS)

Combines DFS and BFS: performs DFS with increasing depth limits.
Finds the shortest path while using less memory.

2.5 Uniform Cost Search (UCS)

Expands the least-cost node first.


Uses a priority queue to store nodes based on path cost.
Guarantees the optimal solution.

3. Informed Search Strategies (Heuristic Search)


These strategies use heuristics (rules of thumb) to guide the search toward the goal more
efficiently.

3.1 Generate-and-Test

Generates a solution and tests whether it is correct.


If incorrect, modifies the solution and tries again.
Simple but inefficient.

3.2 Hill Climbing

Always moves toward the state with the best heuristic value.
Can get stuck in local maxima (states better than all their neighbors but not globally optimal).

3.3 Best-First Search

Expands the most promising node based on a heuristic function.


Uses a priority queue to select nodes.
A* and Greedy Search are examples.
3.4 A* Algorithm

Uses the evaluation function f(n) = g(n) + h(n), where:


g(n) = cost from the start node to the current node n.
h(n) = estimated cost from node n to the goal.
Finds the optimal path when h(n) is admissible (never overestimates the true cost).

3.5 AO* Algorithm

Used in AND-OR graphs.


Useful for solving problems with multiple possible solutions.

4. Constraint Satisfaction Problems (CSPs)


CSPs involve finding values for variables under given constraints.

Example: Sudoku, Timetable Scheduling.


Solved using Backtracking, Forward Checking, Arc Consistency.

5. Problem Reduction
Breaks a complex problem into smaller subproblems.
Example: Divide and Conquer algorithms.

6. Game Playing - Adversarial Search


Adversarial search is used in two-player games where players compete against each other.

6.1 Minimax Algorithm

Assumes both players play optimally.


Maximizing player tries to maximize gain, minimizing player tries to minimize loss.
Example: Chess, Tic-Tac-Toe.

6.2 Alpha-Beta Pruning

Optimizes Minimax by pruning unnecessary nodes.


Reduces search time.

6.3 Problems in Game Playing

Large search spaces make computation expensive.


Time constraints make real-time decision-making difficult.

Conclusion
Searching is fundamental in AI for problem-solving and decision-making. Understanding
these techniques helps in applications like robotics, gaming, and optimization problems.

1. Searching for Solutions

Introduction to Searching

Searching is one of the most important techniques in Artificial Intelligence (AI). AI systems
often need to find a solution by navigating through a search space, which consists of multiple
possible states.

A search problem consists of:

1. Initial State – The starting point of the problem.


2. Goal State – The target or solution we want to reach.
3. Operators – Actions that help move from one state to another.
4. Path Cost – The cost associated with moving between states (time, distance, resources,
etc.).
5. Solution – A sequence of actions that leads from the initial state to the goal state.

Why is Searching Important in AI?

AI uses search techniques in many real-world applications, such as:


✅ Pathfinding – Finding the shortest route in Google Maps.
✅ Game Playing – Chess, Tic-Tac-Toe, and other games use search to decide the best move.
✅ Robotics – Robots use search techniques to plan their movement.
✅ Decision-Making – AI systems use search strategies to evaluate different possible
decisions.

Types of Search Strategies

Search strategies in AI are broadly classified into two categories:

1. Uninformed (Blind) Search – Uses no additional information about the goal. The search
progresses systematically until a solution is found.
2. Informed (Heuristic) Search – Uses additional knowledge (heuristics) to guide the search
towards the goal more efficiently.

2. Uninformed Search Strategies


Uninformed search strategies do not have prior knowledge about the problem’s structure
beyond the given problem definition. These methods systematically explore the search space
until they find a solution.

2.1 Breadth-First Search (BFS)

How it Works:

Explores all neighbor nodes at one level before moving to the next level.
Uses a queue (FIFO - First In, First Out) to store nodes.
Guarantees finding the shortest path in an unweighted graph.

Example:

Imagine a maze where you want to find the shortest route from the entrance to the exit. BFS
will explore all possible paths level by level until it finds the shortest one.

Advantages:

✅ Complete – Always finds a solution if one exists.


✅ Optimal – If all actions have the same cost, BFS finds the shortest path to the goal.
Disadvantages:

❌ High memory usage – BFS stores all nodes at the current level, which can be problematic
for large search spaces.
❌ Time-consuming – If the goal is deep in the tree, BFS will take a long time to reach it.
Complexity Analysis:

Time Complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
Space Complexity: O(b^d), due to storing all frontier nodes in memory.
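
The level-by-level expansion described above can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation; the adjacency-dict `graph` and node names are made up for the example:

```python
from collections import deque

def bfs(graph, start, goal):
    """Return the path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])          # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
path = bfs(graph, "A", "E")              # ['A', 'B', 'D', 'E']
```

Because the queue is FIFO, every node at depth k is expanded before any node at depth k+1, which is exactly why BFS is shortest-path optimal on unweighted graphs.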

2.2 Depth-First Search (DFS)

How it Works:

Explores as far as possible along each branch before backtracking.


Uses a stack (LIFO - Last In, First Out) to store nodes.
Keeps going deeper in one path before checking other paths.

Example:

Imagine navigating through a file system. DFS will go deep into one folder, explore all
subfolders before returning to check the other folders.
Advantages:

✅ Memory efficient – Uses less memory than BFS (only stores nodes along the current path).
✅ Works well for deep solutions – If the goal is deep in the search tree, DFS may reach it
faster.

Disadvantages:

❌ Not always optimal – DFS may find a longer path instead of the shortest one.
❌ Can get stuck in infinite loops – If cycles exist in the search space, DFS can keep revisiting
the same nodes.

Complexity Analysis:

Time Complexity: O(b^m), where m is the maximum depth of the search tree.


Space Complexity: O(bm), since only the nodes along the current path (and their unexpanded siblings) are stored.
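
The stack-based behaviour can be sketched as follows (an illustrative example, with a made-up graph chosen so DFS returns a longer path even though a direct edge to the goal exists):

```python
def dfs(graph, start, goal):
    """Iterative DFS; returns some path from start to goal, not necessarily the shortest."""
    stack = [[start]]                    # LIFO stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()               # most recently pushed path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                stack.append(path + [neighbor])
    return None

# "A" has a direct edge to "D", but DFS dives down the last-pushed branch first.
graph = {"A": ["D", "B"], "B": ["C"], "C": ["D"]}
path = dfs(graph, "A", "D")              # ['A', 'B', 'C', 'D'], not ['A', 'D']
```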

2.3 Depth-Limited Search (DLS)

A variation of DFS with a depth limit to prevent infinite loops.


Helps in cases where DFS runs infinitely in large or cyclic graphs.
Still may not find the shortest solution if the depth limit is too small.

2.4 Iterative Deepening Depth-First Search (IDDFS)

Repeatedly runs DFS with increasing depth limits until a solution is found.
Combines the space efficiency of DFS with the completeness of BFS.
Useful when the depth of the solution is unknown.
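
A rough sketch of IDDFS, under the assumption that the graph is an adjacency dict as in the earlier examples; note the cycle A↔B, which plain DFS could loop on but the depth limit defuses:

```python
def depth_limited(graph, node, goal, limit, path):
    """DFS that never descends more than `limit` edges below `node`."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbor in graph.get(node, []):
        if neighbor not in path:         # avoid cycles along the current path
            found = depth_limited(graph, neighbor, goal, limit - 1, path + [neighbor])
            if found:
                return found
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None

graph = {"A": ["B"], "B": ["A", "C"], "C": ["D"], "D": []}
```

Each iteration re-explores the shallow levels, but since the tree grows exponentially with depth, the repeated work is a small constant factor while memory stays O(bd).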

2.5 Uniform Cost Search (UCS)

Expands the least-cost node first.


Uses a priority queue to select the next node based on path cost.
Guarantees the optimal solution if all costs are positive.

3. Informed Search Strategies (Heuristic Search)


Introduction to Informed Search

Unlike uninformed search, which explores the search space blindly, informed search (also
called heuristic search) uses extra information (a heuristic function) to improve efficiency.
A heuristic function estimates the cost from the current state to the goal state, helping AI
choose the most promising paths first.

Advantages of Informed Search:

✅ More efficient – Reduces the number of explored nodes.
✅ Faster than uninformed search – Guides the search towards the goal using heuristics.
✅ Finds optimal or near-optimal solutions – Especially useful for large search spaces.
✅ Can be adapted for different problem types – Useful in navigation, robotics, and AI planning.

3.1 Generate-and-Test Algorithm

How it Works:

1. Generate a possible solution.


2. Test whether it is a valid solution (goal state).
3. If not, generate another solution and repeat.
4. Stop when a solution is found or no more possibilities exist.
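
The four steps above amount to a loop over candidates; here is a toy sketch (the digit-code puzzle is invented for illustration):

```python
from itertools import product

def generate_and_test(candidates, is_goal):
    """Try candidates one by one until the goal test succeeds."""
    for candidate in candidates:         # step 1/3: generate the next candidate
        if is_goal(candidate):           # step 2: test it
            return candidate             # step 4: stop when a solution is found
    return None                          # no more possibilities exist

# Toy example: find a 3-digit code whose digits sum to 27 (only "999" qualifies).
codes = ("".join(digits) for digits in product("0123456789", repeat=3))
code = generate_and_test(codes, lambda c: sum(map(int, c)) == 27)
```

The inefficiency noted below is visible here: every candidate before the answer is generated and tested, with no heuristic to steer the generator.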

Example:

Trying different possible passwords until you find the correct one.
Testing different configurations in a circuit design until a working one is found.

Advantages:

✅ Simple to implement.
✅ Useful when the search space is small.
✅ Works for problems where solutions can be quickly verified.

Disadvantages:

❌ Inefficient for large search spaces.
❌ No guarantee of finding an optimal solution.
❌ May generate the same solution multiple times, wasting effort.

3.2 Hill Climbing Algorithm

How it Works:

Starts with an initial state and iteratively moves to a neighbor with a better heuristic
value.
Stops when no better neighbor exists.

Types of Hill Climbing:

Simple Hill Climbing – Moves to the first better state found.


Steepest-Ascent Hill Climbing – Considers all neighbors and moves to the best one.
Stochastic Hill Climbing – Selects a neighbor randomly based on a probability.
Example:

Climbing a hill where each step moves you closer to the peak (goal state).
Optimizing a website layout by making incremental design changes.

Advantages:

✅ Requires less memory.
✅ Efficient in some cases.
✅ Works well when heuristic information is reliable.

Disadvantages:

❌ Gets stuck in local maxima.
❌ May fail in flat or shoulder regions.
❌ Does not always find the best solution if an optimal path requires backtracking.
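
The steepest-ascent variant can be sketched as below; the one-dimensional objective f(x) = -(x - 3)² is an invented toy example whose single peak sits at x = 3:

```python
def hill_climb(start, neighbors, score):
    """Steepest-ascent hill climbing: repeatedly move to the best-scoring neighbor."""
    current = start
    while True:
        best = max(neighbors(current), key=score)   # examine all neighbors
        if score(best) <= score(current):
            return current               # no neighbor improves: a (local) maximum
        current = best

f = lambda x: -(x - 3) ** 2              # maximized at x = 3
peak = hill_climb(0, lambda x: [x - 1, x + 1], f)
```

On this convex toy objective the local maximum is also the global one; on a bumpier function the same loop would simply stop at whichever peak it reached first, which is the local-maxima failure mode noted above.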

3.3 Best-First Search

How it Works:

Uses a priority queue to always expand the most promising node, ranked by a heuristic function h(n).
The heuristic function estimates how close the node is to the goal.
More sophisticated than simple greedy algorithms as it uses problem-specific knowledge.

Example:

Navigating using Google Maps, where the algorithm selects the shortest predicted route.
Pathfinding in video games where an AI-controlled character takes the best possible
path.

Advantages:

✅ More efficient than BFS and DFS.
✅ Works well for large search spaces.
✅ Uses heuristic knowledge effectively to guide search.

Disadvantages:

❌ Not always optimal.
❌ Can get stuck in loops if not implemented carefully.
❌ Performance highly depends on the accuracy of the heuristic function.
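
The greedy form of best-first search, which ranks the frontier purely by h(n), can be sketched like this (graph and heuristic values are invented for the example):

```python
import heapq

def greedy_best_first(graph, start, goal, h):
    """Always expand the frontier node with the smallest heuristic value h(n)."""
    frontier = [(h(start), start, [start])]   # min-heap keyed on h(n) only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h(neighbor), neighbor, path + [neighbor]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}      # smaller = estimated closer to goal
path = greedy_best_first(graph, "A", "D", lambda n: h[n])
```

Note that unlike A*, the accumulated path cost g(n) is ignored here, which is exactly why this variant is fast but not guaranteed optimal.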

4. Constraint Satisfaction Problems (CSP)


Introduction to CSP

A Constraint Satisfaction Problem (CSP) is a type of search problem where the goal is to find
a solution that satisfies a set of constraints.
Components of CSP:

1. Variables: The elements that need to be assigned values.


2. Domains: The possible values each variable can take.
3. Constraints: The conditions that restrict the values the variables can take.

Example:

Sudoku: Each row, column, and block must contain unique numbers.
Map Coloring: Neighboring countries must have different colors.
Scheduling Problems: Assigning tasks to workers without overlapping shifts.

Solving CSP:

Backtracking Search: A depth-first search algorithm that assigns values step by step and
backtracks when a conflict occurs.
Forward Checking: Eliminates values from future choices that violate constraints.
Arc Consistency (AC-3 Algorithm): Ensures all constraints are satisfied before making
assignments.
Constraint Propagation: Reduces the search space by enforcing constraints early in the
search process.
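
Backtracking search over a CSP can be sketched with the map-coloring example; the three-region map and color names are illustrative:

```python
def backtrack(assignment, variables, domains, neighbors):
    """Assign one variable at a time; undo and retry when a constraint is violated."""
    if len(assignment) == len(variables):
        return assignment                # every variable assigned consistently
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # constraint: no two neighboring regions share a color
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result:
                return result
            del assignment[var]          # conflict downstream: backtrack
    return None

# 3-coloring of three mutually adjacent regions.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
solution = backtrack({}, variables, domains, neighbors)
```

Forward checking and arc consistency would prune `domains` before or during this same loop; the skeleton above is the plain backtracking baseline they improve on.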

Advantages of CSP:

✅ Efficient for structured problems.
✅ Eliminates unnecessary search paths early.
✅ Allows for systematic and logical problem-solving.

Disadvantages of CSP:

❌ Can be computationally expensive for large problems.
❌ Difficult to apply if constraints are not well defined.
❌ Some problems may require complex heuristics to guide search efficiently.

5. A* and AO* Algorithms
5.1 A* Algorithm

Introduction:

A* (A-star) is one of the most popular informed search algorithms. It combines Uniform Cost
Search (UCS) and greedy Best-First Search using the evaluation function f(n) = g(n) + h(n),

where:

g(n) = cost from the start node to node n
h(n) = heuristic estimate of the cost from node n to the goal
f(n) = estimated total cost of the best path through node n

How A* Works:

1. Place the start node in an open list.


2. Select the node with the lowest f(n).
3. Expand the node and update the cost of neighbors.
4. Repeat until the goal node is reached.
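
The four steps above can be sketched directly; the weighted graph and heuristic table are invented, with the heuristic chosen to be admissible (it never overestimates the remaining cost):

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search: expand the open node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h(start), 0, start, [start])]   # heap entries: (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # step 2: lowest f(n)
        if node == goal:
            return g, path
        for neighbor, step in graph.get(node, []):    # step 3: expand, update costs
            new_g = g + step
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(open_list,
                               (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 3)]}
heuristic = {"A": 3, "B": 2, "C": 2, "D": 0}
cost, path = a_star(graph, "A", "D", lambda n: heuristic[n])
```

With an admissible heuristic, the first time the goal is popped from the open list its g value is the true optimal cost.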

Example:

Used in GPS navigation systems to find the shortest and most efficient path.

Advantages:

✅ Guaranteed to find the optimal solution if the heuristic is admissible.
✅ More efficient than other search algorithms for many problems.
✅ Works well for pathfinding problems.

Disadvantages:

❌ Memory-intensive, as it stores all generated nodes in memory.
❌ Performance highly depends on the heuristic function.

5.2 AO* Algorithm

Introduction:

The AO* (And-Or Star) algorithm is used for problems where solutions
form an AND-OR graph rather than a single linear path.

How AO* Works:

1. Uses backtracking and heuristic-based expansion like A*.


2. Expands the most promising path first.
3. Works well for problems with multiple possible solutions that depend on conditional
choices.

Example:

Used in decision trees and game AI, where multiple possible strategies exist.

Advantages:

✅ Handles AND-OR problems effectively.
✅ Finds optimal solutions for complex scenarios.
✅ Works well in hierarchical problem-solving.

Disadvantages:

❌ Can be complex to implement.
❌ May require strong heuristics for efficiency.


6. Problem Reduction
6.1 Problem Reduction

Problem reduction refers to breaking down a complex problem into smaller, easier-to-solve
subproblems. This technique is used in AI planning, constraint satisfaction, and reasoning.

Examples:

Divide and Conquer algorithms like Merge Sort.


Hierarchical planning in robotics where a large task is broken into smaller steps.
AND-OR graphs used in AO* algorithms to solve problems in stages.

7. Game Playing and Adversarial Search

7.1 Introduction to Game Playing in AI

Game playing in AI involves designing intelligent agents that can compete against an
opponent in a structured environment with well-defined rules. These problems are often
modeled as adversarial search, where multiple agents compete with conflicting goals.

7.2 Characteristics of Game Playing AI

Adversarial Nature: AI must make decisions considering the opponent's possible moves.
Decision Making Under Constraints: Limited time and computational resources require
efficient strategies.
Search-Based Approach: AI explores possible future game states using search
techniques.
Evaluation Function: Used to estimate the desirability of a position when the search
cannot reach a terminal state.

7.3 Types of Games in AI

1. Deterministic vs. Stochastic:


Deterministic: No element of chance (e.g., Chess, Checkers).
Stochastic: Involves randomness (e.g., Backgammon, Poker).
2. Perfect Information vs. Imperfect Information:
Perfect Information: Players see the entire game state (e.g., Chess, Go).
Imperfect Information: Some information is hidden (e.g., Poker).

7.4 Adversarial Search Strategies


Adversarial search differs from normal search because it considers an opponent actively
trying to minimize our chances of winning. The most widely used technique for adversarial
search is the Minimax Algorithm.

7.4.1 Minimax Algorithm

Concept:

The Minimax Algorithm is a decision-making strategy used for turn-based, two-player


games. It assumes that the opponent plays optimally and aims to minimize the AI’s
advantage while the AI tries to maximize its own advantage.

How Minimax Works:

1. Game Tree Construction: The AI generates a tree of possible moves up to a certain


depth.
2. Assign Utility Values: At terminal nodes (game-ending states), the AI assigns a value
based on the desirability of the state.
3. Min and Max Nodes:
Maximizing Player (MAX): Tries to select the move with the highest value.
Minimizing Player (MIN): Tries to select the move with the lowest value (best for the
opponent).
4. Backpropagation of Values: The AI propagates these values back up the tree, assuming
both players play optimally.
5. Best Move Selection: The AI selects the move leading to the best possible outcome.
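
The five steps above can be sketched as a recursive function over a toy two-ply game tree; the `tree` and leaf utility values are invented for the example:

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Return the minimax value of a state, searching to the given depth."""
    kids = children(state)
    if depth == 0 or not kids:           # terminal node or depth limit: assign utility
        return evaluate(state)
    values = [minimax(k, depth - 1, not maximizing, children, evaluate) for k in kids]
    return max(values) if maximizing else min(values)   # MAX vs. MIN node

# MAX moves at the root; MIN replies one ply below.
tree = {"root": ["L", "R"], "L": ["LL", "LR"], "R": ["RL", "RR"]}
leaf_values = {"LL": 3, "LR": 5, "RL": 2, "RR": 9}
value = minimax("root", 2, True,
                lambda s: tree.get(s, []),
                lambda s: leaf_values[s])
# MIN yields min(3, 5) = 3 on the left and min(2, 9) = 2 on the right; MAX picks 3.
```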

Example of Minimax in Tic-Tac-Toe:

The AI examines all possible moves and determines which one leads to the best worst-
case scenario.

Limitations of Minimax:

❌ Computationally Expensive: The number of possible game states grows exponentially.
❌ Limited Depth of Search: For complex games, searching the entire game tree is infeasible.

7.4.2 Alpha-Beta Pruning

Concept:

Alpha-Beta Pruning is an optimization for Minimax that reduces the number of nodes
evaluated by the algorithm, making it more efficient.

How Alpha-Beta Pruning Works:

1. Introduces two variables:


Alpha (α): The best value that the maximizing player can guarantee.
Beta (β): The best value that the minimizing player can guarantee.
2. As the search progresses, branches that cannot possibly influence the final decision are
pruned (ignored).
3. This reduces the number of nodes that need to be evaluated, improving efficiency.
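
The α/β bookkeeping above can be sketched by extending the plain Minimax recursion; the same invented toy tree as before shows a cutoff (once the right branch is known to be worth at most 2, its last leaf need not be evaluated):

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning: skip branches that cannot affect the result."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)    # best MAX can guarantee so far
            if alpha >= beta:
                break                    # beta cutoff: MIN will avoid this branch
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)          # best MIN can guarantee so far
        if alpha >= beta:
            break                        # alpha cutoff
    return value

tree = {"root": ["L", "R"], "L": ["LL", "LR"], "R": ["RL", "RR"]}
leaf_values = {"LL": 3, "LR": 5, "RL": 2, "RR": 9}
value = alphabeta("root", 2, float("-inf"), float("inf"), True,
                  lambda s: tree.get(s, []),
                  lambda s: leaf_values[s])
```

The returned value is identical to plain Minimax; only the number of evaluated nodes shrinks.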

Example of Alpha-Beta Pruning:

In Chess, instead of evaluating all possible moves, the AI stops considering a branch
once it determines it will not be chosen.

Advantages of Alpha-Beta Pruning:

✅ Speeds up Minimax significantly.
✅ Maintains the same optimal decision.
✅ Works best when nodes are explored in the best order (ordering moves wisely helps).

Limitations of Alpha-Beta Pruning:

❌ Does not eliminate the exponential complexity completely.
❌ Effectiveness depends on the order of move evaluation.

7.5 Problems in Game Playing AI


7.5.1 Challenges in Adversarial Search

1. Huge Search Space:


Games like Chess and Go have an immense number of possible moves.
AI must approximate solutions using heuristics and evaluation functions.
2. Time Constraints:
Many games impose strict time limits, requiring real-time decision-making.
AI uses iterative deepening and move ordering to improve performance.
3. Handling Uncertainty:
Stochastic games introduce randomness (e.g., rolling dice in Backgammon).
AI must incorporate probability and expectimax search to handle chance elements.
4. Imperfect Information Games:
Games like Poker require reasoning under uncertainty (hidden cards).
AI uses techniques like Monte Carlo Tree Search (MCTS) and belief-state modeling.
5. Opponent Modeling:
Predicting the opponent’s strategy is crucial in advanced AI.
AI adapts using reinforcement learning and pattern recognition.

7.6 Advanced Techniques in Game AI


7.6.1 Expectimax Algorithm
Used in stochastic games where chance plays a role.
Instead of assuming an optimal opponent, it considers random events (dice rolls,
shuffled cards).

7.6.2 Monte Carlo Tree Search (MCTS)

A powerful AI strategy used in Go, Poker, and video games.


Uses random simulations to estimate the best move instead of building a full search
tree.

7.6.3 Reinforcement Learning in Game AI

AI learns strategies by playing repeatedly.


Used in DeepMind’s AlphaGo to defeat human champions in Go.
Deep Reinforcement Learning (DRL) enables AI to master games without human data.

7.7 Summary of Game Playing and Adversarial Search

Concept – Description

Minimax – Standard decision-making algorithm for adversarial games.
Alpha-Beta Pruning – Optimizes Minimax by cutting unnecessary branches.
Expectimax – Handles stochastic (random) elements in games.
Monte Carlo Tree Search – Uses simulations to estimate the best move.
Reinforcement Learning – AI learns game strategies through trial and error.
7.8 Conclusion
Adversarial search plays a crucial role in game AI, enabling AI agents to compete at a
superhuman level in complex games. Techniques like Minimax, Alpha-Beta Pruning, and
MCTS help AI make better decisions, while machine learning and reinforcement learning
allow AI to adapt and improve over time.
