4. Local Search (Artificial Intelligence)
Local search algorithms
• In many optimization problems, the state space is the space of all possible complete solutions
• We have an objective function that tells us how “good” a given state is, and we want to find the solution (goal) by minimizing or maximizing the value of this function
Example: n-queens problem
• Put n queens on an n × n board with no two queens on the same row, column, or diagonal
• State space: all possible n-queen configurations
• What’s the objective function?
– Number of pairwise conflicts
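As a concrete illustration of this objective (a minimal sketch, not from the slides), the snippet below counts pairwise conflicts for a board stored as one queen per column, where board[c] is the row of the queen in column c; the representation and the helper name num_conflicts are my own choices.

```python
def num_conflicts(board):
    """Count pairs of queens that attack each other.

    board[c] is the row of the queen in column c, so queens can only
    conflict on rows and diagonals (one queen per column by construction).
    """
    n = len(board)
    conflicts = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = board[c1] == board[c2]
            same_diag = abs(board[c1] - board[c2]) == abs(c1 - c2)
            if same_row or same_diag:
                conflicts += 1
    return conflicts

# All four queens on the same row: every pair conflicts.
print(num_conflicts([0, 0, 0, 0]))  # 6
print(num_conflicts([1, 3, 0, 2]))  # 0 -- a solution for n = 4
```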
Example: Traveling salesman problem
• Find the shortest tour connecting a given set of cities
• State space: all possible tours
• Objective function: length of tour
Local search algorithms
• In many optimization problems, the state space is the space of all possible complete solutions
• We have an objective function that tells us how “good” a given state is, and we want to find the solution (goal) by minimizing or maximizing the value of this function
• The start state may not be specified
• The path to the goal doesn’t matter
• In such cases, we can use local search algorithms that keep a single “current” state and gradually try to improve it
Example: n-queens problem
• Put n queens on an n × n board with no two queens on the same row, column, or diagonal
• State space: all possible n-queen configurations
• Objective function: number of pairwise conflicts
• What’s a possible local improvement strategy?
– Move one queen within its column to reduce conflicts
Example: n-queens problem (continued)
• Objective function: number of pairwise conflicts
• Local improvement strategy: move one queen within its column to reduce conflicts
[Board figure: an 8-queens configuration with h = 17 pairwise conflicts]
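One way to realize this move, sketched below with my own helper names (it reuses a conflict counter like the num_conflicts sketch above, passed in as a parameter): try every alternative row for every queen and return the single-queen move that lowers the conflict count the most.

```python
def best_single_queen_move(board, num_conflicts):
    """Return the neighbor obtained by moving one queen within its column
    that yields the fewest pairwise conflicts, together with its conflict
    count (returns the board unchanged if no single move improves it)."""
    best_board, best_h = list(board), num_conflicts(board)
    for col in range(len(board)):
        for row in range(len(board)):
            if row == board[col]:
                continue                      # queen already in this row
            candidate = list(board)
            candidate[col] = row              # move one queen within its column
            h = num_conflicts(candidate)
            if h < best_h:
                best_board, best_h = candidate, h
    return best_board, best_h
```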
Example: Traveling Salesman Problem
• Find the shortest tour connecting n cities
• State space: all possible tours
• Objective function: length of tour
• What’s a possible local improvement strategy?
– Start with any complete tour, perform pairwise exchanges
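The slide leaves the exchange operation abstract; one common concrete choice is a 2-opt style move that reverses a segment of the tour. The sketch below is illustrative only (the Euclidean-distance assumption and the function names are mine) and returns the first exchange that shortens the tour, or None if no exchange helps.

```python
import math

def tour_length(tour, cities):
    """Total length of a closed tour; cities maps an index to an (x, y) pair."""
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def improve_by_exchange(tour, cities):
    """Try pairwise (2-opt style) exchanges: reverse a segment of the tour
    and keep the first reversal that shortens it."""
    base = tour_length(tour, cities)
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(candidate, cities) < base:
                return candidate
    return None
```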
Hill-climbing search
• Initialize current to starting state
• Loop:
– Let next = highest-valued successor of current
– If value(next) < value(current), return current
– Else let current = next
• Variants: choose the first better successor, or randomly choose among better successors
• “Like climbing Mount Everest in thick fog with amnesia”
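A direct transcription of this loop into Python, as a sketch only: successors(state) and value(state) are placeholder functions for whatever problem is being solved, and the climb stops as soon as the best successor is no better than the current state.

```python
def hill_climbing(start, successors, value):
    """Steepest-ascent hill climbing: always move to the highest-valued
    successor, and stop when no successor improves on the current state."""
    current = start
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        next_state = max(neighbors, key=value)
        if value(next_state) <= value(current):   # no uphill move left
            return current
        current = next_state
```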
Hill-climbing search
• Is it complete/optimal?
– No: it can get stuck in local optima
– Example: a local optimum for the 8-queens problem
[Board figure: an 8-queens local optimum with h = 1]
The state space “landscape”
• How to escape local maxima?
– Random-restart hill-climbing (sketched below)
• What about “shoulders”?
• What about “plateaux”?
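Random-restart hill climbing simply reruns the climb from fresh random states and keeps the best result. A minimal sketch, assuming the hill_climbing function above plus a random_state() generator (both placeholders):

```python
def random_restart_hill_climbing(random_state, successors, value, restarts=25):
    """Run hill climbing from several random starting states and keep the
    best local optimum found; more restarts give more chances to escape
    poor local maxima."""
    best = hill_climbing(random_state(), successors, value)
    for _ in range(restarts - 1):
        candidate = hill_climbing(random_state(), successors, value)
        if value(candidate) > value(best):
            best = candidate
    return best
```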
Simulated annealing search
• Idea: escape local maxima by allowing some “bad” moves, but gradually decrease their frequency
– The probability of taking a downhill move decreases with the number of iterations and the steepness of the downhill move
– Controlled by an annealing schedule
• Inspired by the tempering of glass and metal
Simulated annealing search
• Initialize current to starting state
• For i = 1 to ∞:
– If T(i) = 0, return current
– Let next = random successor of current
– Let Δ = value(next) − value(current)
– If Δ > 0, then let current = next
– Else let current = next with probability exp(Δ/T(i))
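The same pseudocode in Python, as a hedged sketch: random_successor and value are placeholders, the geometric cooling schedule and its parameters are my own choices, and the temperature is treated as zero once it falls below a small threshold.

```python
import math
import random

def simulated_annealing(start, random_successor, value,
                        t0=1.0, cooling=0.995, t_min=1e-6):
    """Simulated annealing for a maximization problem.

    Uphill moves are always taken; downhill moves (delta < 0) are taken
    with probability exp(delta / T), which shrinks as T decreases."""
    current = start
    t = t0
    while t > t_min:                      # schedule: T(i) = t0 * cooling**i
        nxt = random_successor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling
    return current
```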
Effect of temperature
[Plot: acceptance probability exp(Δ/T) as a function of Δ for several temperatures T]
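To see the effect numerically (my own illustrative numbers, not from the slides): for a fixed downhill move of Δ = −2, the acceptance probability exp(Δ/T) drops sharply as the temperature falls.

```python
import math

# Acceptance probability exp(delta / T) for a downhill move of delta = -2:
for T in (10.0, 1.0, 0.1):
    print(T, math.exp(-2 / T))
# T = 10.0 -> ~0.82   (bad moves accepted often early on)
# T =  1.0 -> ~0.14
# T =  0.1 -> ~2e-9   (bad moves essentially never accepted late)
```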
Simulated annealing search
• One can prove: if the temperature decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching one
• However:
– This usually takes impractically long
– The more downhill steps you need to escape a local optimum, the less likely you are to make all of them in a row
• More modern techniques: the general family of Markov Chain Monte Carlo (MCMC) algorithms for exploring complicated state spaces
Local beam search
• Start with k randomly generated states
• At each iteration, all the successors of all k states are generated
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat
• Is this the same as running k greedy searches in parallel?
[Figure: greedy search vs. beam search]
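A minimal sketch of this loop (successors, value, and is_goal are placeholder functions, and the iteration cap is my own addition). Note the answer to the slide's question: unlike k independent greedy searches, the k survivors are chosen from the pooled successor list, so successors of promising states can crowd out those of unpromising ones.

```python
import heapq

def local_beam_search(starts, successors, value, is_goal, max_iters=1000):
    """Keep the k best states overall at each step, where k = len(starts)."""
    k = len(starts)
    beam = list(starts)
    for _ in range(max_iters):
        pool = [s for state in beam for s in successors(state)]
        if not pool:
            return max(beam, key=value)
        for s in pool:
            if is_goal(s):
                return s
        # Select the k best successors from the combined pool, not k per parent.
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)
```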
