Computational Intelligence Mid-2 Notes

UNIT-3

Genetic Algorithm

❖ A Genetic Algorithm is a search algorithm based on the principles of biological evolution. It provides effective techniques for optimization and machine learning applications.
❖ Genetic Algorithms are good at taking large, potentially huge search spaces and navigating them, looking for optimal combinations of things: solutions you might not otherwise find in a lifetime.
❖ Problems of optimization in evolution involve numerous variables acting simultaneously; there will always be trade-offs.
❖ The GA was developed by John Holland at the University of Michigan in the 1960s and 1970s, with the aim of making computers do what nature does.
❖ The simplest evolutionary algorithm can be viewed as a search procedure that generates potential solutions to a problem, tests each for suitability, and then generates new solutions.
❖ This process differs from exhaustive search and from blind random search.

Evolutionary Computation

(Figure: the key aspects of a genetic algorithm.)


Genetic Algorithm General Structure

These algorithms operate on a set of candidate solutions encoded as strings of binary digits or
other data structures.
Five phases are considered in a genetic algorithm.
➢ Initial population
➢ Selection
➢ Crossover
➢ Mutation
➢ Termination

● At the core of a genetic algorithm is the concept of a population, representing a collection


of potential solutions to the problem at hand.
● Each individual within the population corresponds to a particular solution, and a set of
parameters called genes defines its characteristics.
● These genes encode the properties or features of the solution, and they can be represented
as binary strings, real-valued numbers, or other data types.
● The genetic algorithm begins with an initial population of individuals, typically generated
randomly.
It then goes through a series of iterations, known as generations or epochs, in which the individuals undergo operations such as selection, crossover, and mutation. These operations mimic the processes of natural selection, reproduction, and genetic variation observed in biological evolution; a minimal sketch of this loop appears below.
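To make the loop concrete, here is a minimal, generic GA skeleton in Python. It is a sketch, not the algorithm of any particular text: the OneMax fitness function, population size, and rates are illustrative placeholders, and the operators are the simple ones described later in these notes.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 100
CROSSOVER_RATE, MUTATION_RATE = 0.9, 1.0 / GENOME_LEN

def fitness(genome):
    return sum(genome)                  # placeholder objective ("OneMax")

def select(pop):
    a, b = random.sample(pop, 2)        # size-2 tournament (see below)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    if random.random() < CROSSOVER_RATE:
        cut = random.randrange(1, GENOME_LEN)   # one-point crossover
        return p1[:cut] + p2[cut:]
    return p1[:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g
            for g in genome]            # bit-flip mutation

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
print(max(fitness(g) for g in population))
```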
(Figure: flow chart representation of a genetic algorithm.)

When Is Using a GA Recommended?


Some Applications of GA
BASIC GA OPERATORS

Selection – selects a set of chromosomes to serve as parents.
Crossover – looks for solutions near existing solutions.
Mutation – looks at completely new areas of the search space.

Selection: It aims at selecting good individuals as parents for crossover, where the goodness of a parent individual is quantified by its fitness. Thus most parent selection schemes focus on giving more opportunities to fitter individuals than to less fit ones, so that "good" offspring individuals are likely to be generated.

Selection Procedures

● As genetic algorithms are heuristic search and optimization techniques inspired by the principles of natural selection and genetics, selection refers to the process of choosing individuals from a population to serve as parents for the creation of the next generation. This decision is influenced by the individuals' fitness: how well they solve the problem at hand.
● To make selection possible, a fitness value, which depends on the objective function, must be defined as a fitness function, and each individual's fitness must be computed.
● Using the fitness value, a GA applies a selection mechanism to choose individuals from the population to insert into a mating pool. Individuals from the mating pool are used to generate new offspring, with the resulting offspring forming the basis of the next generation. A selection mechanism in a GA is simply a process that favors the selection of better individuals in the population for the mating pool.

The following are various selection procedures.

Roulette Wheel Selection:

In this technique, all the chromosomes in the population are placed on a roulette wheel according to their fitness values. Each individual is assigned a segment of the wheel whose size is proportional to its fitness: the bigger the fitness value, the larger the segment. Then the virtual roulette wheel is spun, and the individual corresponding to the segment on which the wheel stops is selected. The process is repeated until the desired number of individuals has been selected. Individuals with higher fitness have a higher probability of selection, but there is no guarantee that good individuals will find their way into the next generation.
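A minimal sketch of roulette wheel (fitness-proportionate) selection in Python, assuming nonnegative fitness values; the population and fitness lists are whatever the surrounding GA provides.

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    spin = random.uniform(0, total)          # where the wheel "stops"
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit                       # advance past this segment
        if running >= spin:
            return individual
    return population[-1]                    # guard against rounding error
```

In practice, `random.choices(population, weights=fitnesses)[0]` accomplishes the same thing in one call.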
Rank Selection:
Rank selection ranks the population, and every chromosome receives its fitness from the ranking: the worst has fitness 1 and the best has fitness N. This slows convergence, which prevents convergence that is too quick, and it also keeps up selection pressure when the fitness variance is low.
Tournament Selection:
Tournament selection provides selection pressure by holding a tournament among s competitors, with s being the tournament size. The winner of the tournament is the individual with the highest fitness among the competitors, and the winner is then inserted into the mating pool. The mating pool, being comprised of tournament winners, has a higher average fitness than the average population fitness. The selection pressure is the degree to which the better individuals are favored: the higher the selection pressure, the more the better individuals are favored.
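A sketch of tournament selection with tournament size s; the higher s is, the higher the selection pressure.

```python
import random

def tournament_select(population, fitness, s=3):
    """Return the fittest of s randomly chosen competitors."""
    competitors = random.sample(population, s)
    return max(competitors, key=fitness)
```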
Steady State Selection:
In this method, a few good (high-fitness) chromosomes are used for creating new offspring in every iteration. Then some bad (low-fitness) chromosomes are removed, and the new offspring are placed in their stead. The rest of the population survives to the next generation without going through the selection process. The main idea of this selection is that a large part of the population should survive into the next generation.
Elitism Selection:
The idea here is to arrange the chromosomes in decreasing order of their fitness values, and then apply selection to each pair of chromosomes in the arranged set. In this way, the genetic operators are applied between strong chromosomes or between weak chromosomes; there is no chance of applying them between a weak and a strong chromosome. Elitism itself is a kind of selection in which the best individual is passed to the next generation as-is: it is exempted from the reproduction process so that it survives into the next generation without any modification.
Note
In general, it is observed that a genetic algorithm with elitism provides a more optimal solution and has better convergence speed.

Variation

Crossover and mutation are variation operators that provide the means for searching the solution space for improved solutions, or potentially for weaker solutions that could lead to improved solutions. There are many traditional variation operators, such as binary mutation, Gaussian mutation, and one-point, n-point, or uniform crossover. The choice of variation operators also depends on the choice of representation.

Crossover Operators:
Crossover resembles the reproduction mechanism in nature; crossover and mutation operators are therefore collectively called reproductive operators. In general, a crossover operator combines two individuals to form a new individual: it splits each individual into parts and then assembles those parts into a new individual.
Crossover operators are representation-dependent. Classic crossover operators include the following:
• One-Point Crossover: One-point crossover is a commonly used crossover operator because of its simplicity. Given two individuals, it randomly chooses a cut point in their genomes and then swaps the parts after (or before) the cut point between the two genomes.
• Two-Point Crossover: Two-point crossover is another commonly used operator, motivated by the observation that one-point crossover has a positional bias toward the terminal positions. For instance, in one-point crossover the rightmost (or leftmost) part is always swapped; two-point crossover avoids this positional bias.
• Uniform Crossover: Uniform crossover is a general operator in which each gene is given an equal probability of being swapped.
• Blend Crossover: Blend crossover is commonly used in real-number optimization. Instead of swapping genes, it blends two genes together by arithmetic averaging to obtain intermediate values. For instance, a blend crossover between the two vectors [1 2 3] and [4 5 6] yields the blended vector [2.5 3.5 4.5]. Weights can also be applied here. (A code sketch of these operators follows.)
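A minimal sketch of one-point, uniform, and blend crossover on list genomes; these are the generic textbook operators just described, written out for illustration.

```python
import random

def one_point(p1, p2):
    """Swap the tails after a random cut point."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def uniform(p1, p2, swap_prob=0.5):
    """Give each gene an equal chance of being swapped."""
    c1, c2 = list(p1), list(p2)
    for i in range(len(p1)):
        if random.random() < swap_prob:
            c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

def blend(p1, p2, w=0.5):
    """Arithmetic averaging of real-valued genes; w = 0.5 reproduces
    the [1 2 3] x [4 5 6] -> [2.5 3.5 4.5] example above."""
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]
```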

Mutation Operators
Mutation operators resemble the mutation mechanism in nature, in which some parts of a genome undergo random changes. Thus, as a typical modeling, a mutation operator changes parts of the genome of an individual probabilistically. Like crossover operators, mutation operators are representation-dependent.
Commonly used mutation operators include the following:
• Bit-Flip Mutation: Commonly used on binary genomes. With a pre-defined probability, each bit in the genome is inverted.
• Random Mutation: Random mutation generalizes bit-flip mutation and can be applied to many genome types. With a pre-defined probability, each part of the genome is changed to a random value within the domain bounds.
• Delta Mutation: Commonly used on real-number genomes. With a pre-defined probability, each real number in the genome is incremented or decremented by a certain pre-specified step size (called delta). It is also straightforward to make the step size adaptive, similar to the trial vector generation in differential evolution.
• Gaussian Mutation: Also commonly used on real-number genomes. As in delta mutation, each real number is probabilistically increased or decreased by a step size; the difference is that the step size is a Gaussian random number. (A code sketch of these operators follows.)
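A sketch of bit-flip and Gaussian mutation; the default rates and step size are arbitrary illustrative values.

```python
import random

def bitflip(genome, p=0.01):
    """Invert each bit independently with probability p."""
    return [1 - g if random.random() < p else g for g in genome]

def gaussian(genome, p=0.1, sigma=0.1):
    """With probability p, perturb each real gene by N(0, sigma^2) noise."""
    return [g + random.gauss(0.0, sigma) if random.random() < p else g
            for g in genome]
```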

Evolutionary Optimization

In solving optimization problems, the problem to be solved must be well defined, which means that any possible solution must be comparable to any other possible solution. Most often, the comparisons between two or more candidate solutions are based on quantitative measures of how well a proposed solution meets the needs of the problem.
Evolutionary optimization is a quantitative technique in which there is a numeric description (a function) that operates on a potential solution and returns either a single real number or multiple numbers that describe the value of the solution.
Within evolutionary optimization, as with all engineering, there are essentially two forms of optimization problems.
One form is numeric. For example, find the point (x, y) such that f(x, y) = x² + y² is minimized.
The other form is combinatorial. For example, given a collection of tools, each with a certain weight, find the combination with the greatest combined weight that will fit in a bag that can hold only 25% of the weight of all the tools combined. This is a variant of the knapsack problem.
Note:Traveling Sales Person is also a combinatorial optimization problem.

Consider the case of searching for a point x ∈ ℝ such that f(x) = x² is minimized. A minimal evolutionary search for this case is sketched below.
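A sketch of perhaps the simplest evolutionary optimizer for this case: a (1+1) scheme with Gaussian mutation, where a single parent produces one mutated child and the better of the two survives. The starting point and step size are arbitrary choices.

```python
import random

def f(x):
    return x * x

x, sigma = 10.0, 1.0                       # arbitrary start and step size
for _ in range(1000):
    child = x + random.gauss(0.0, sigma)   # mutate the lone parent
    if f(child) <= f(x):                   # keep the better of the two
        x = child
print(x)                                   # approaches the minimizer x = 0
```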
A Canonical Example in Two or More Dimensions

In the case of more dimensions, the same procedure applies, with candidate solutions taken as vectors x ∈ ℝⁿ and mutation applied to each component.


Evolution versus Gradient Methods
COMBINATORIAL OPTIMIZATION

Consider the traveling salesman problem.


The problem is as follows.
There are n cities. The salesman starts at one of these cities and must visit each other city once
and only once and then return home. The salesman wants to do this in the shortest distance.
The problem then is to determine the best ordering of the cities to visit. This is a difficult problem
because the total number of possible solutions increases as a factorial function of the number of
cities.
The traveling salesman problem is NP-hard: there are no known methods for generating solutions in a time that scales as a polynomial function of the number of cities, n. There are heuristics available for this canonical form of the traveling salesman problem, and some of these can be effective even for large n in certain cases.
An evolutionary approach to the problem is as follows. The process is repeated for a chosen number of generations, and adding the possibility of mutating offspring via inversion (reversing a randomly chosen segment of a tour) allows the evolutionary search to proceed toward improved solutions; a sketch appears below.
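A minimal sketch of such an approach under simple assumptions: random city coordinates, tours represented as permutations, inversion as the only variation operator, and truncation selection. It is illustrative, not the exact procedure from the text.

```python
import random, math

random.seed(1)
CITIES = [(random.random(), random.random()) for _ in range(20)]

def tour_length(tour):
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def invert(tour):
    """Inversion mutation: reverse a randomly chosen segment of the tour."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(50)]
for gen in range(500):
    offspring = [invert(p) for p in pop]
    # Truncation selection: keep the best half of parents plus offspring.
    pop = sorted(pop + offspring, key=tour_length)[:50]
print(tour_length(pop[0]))
```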

Convergence in Evolutionary Computing


Convergence with Probability 1
Evolutionary algorithms can be constructed in the framework of a Markov chain, which provides the mathematical tools to show that these algorithms can converge to an optimum solution with probability 1. The essence of this approach is to consider the different configurations of an evolving population to be states in a Markov chain.

Premature Convergence
Evolutionary algorithms that rely predominantly on crossover or other recombination methods to generate offspring can sometimes suffer from what is termed premature convergence, which occurs when a population becomes homogeneous at a solution that is not the global optimum (or is less than desirable). The term is often used incorrectly to describe the effect of converging to a local optimum, but the origin of the term applies directly to the case in which no further progress is likely because the population lacks diversity, which effectively stalls an evolutionary algorithm that is heavily reliant on crossover or recombination. (We saw an example of premature convergence when exploring the use of the PMX operator on the traveling salesman problem: the PMX operator became ineffective when the population became homogeneous.)
In some studied cases of evolutionary algorithms that rely heavily on crossover, the likelihood of converging prematurely to a given solution has been shown to be related to the quality of the solution (i.e., there was more likelihood of stalling at a point if that point was of higher quality). The most common methods for overcoming premature convergence include restarting from a new randomized population, using heuristics to move the population to a new collection of points (e.g., by hill climbing), or redesigning the approach.

Representation in Evolutionary Computing


Designing an evolutionary optimization algorithm requires choices of representation, selection, and variation operators.

CONSTRAINT HANDLING
Almost all real-world problems are constrained problems, so when applying evolutionary algorithms it is important to consider how to treat the constraints of the problem. Some of these constraints are part of the objective, whereas some are part of the parameters of a solution and therefore impose boundary conditions on the search space. Each of these may pose hard or soft constraints.
A hard constraint is one that, if violated, makes the entire proposed solution worthless.
For example, suppose you are designing a new golf club for professional
golfers. The United States Golf Association (USGA) requires that all golf clubs be at
least 18 in. in length and not more than 48 in. in length. These are hard constraints on
your design because anything outside of these limits cannot be used in regulation play.

A soft constraint is one that can be violated, but there is some imposed penalty for
violating it, and perhaps the penalty increases with the degree of violation. For
example, suppose you are creating a schedule for fuel trucks to refuel gas stations. A
constraint would be to design a schedule in which no station is ever empty, which
would mean the oil company would have customers waiting at the gas pumps for a
refueling truck. But this is not likely to be a hard constraint. If you crafted a schedule
that had a gas station out of fuel for 1 min, it might lead to some bad publicity. In fact,
it would be more likely to add to bad publicity the longer the station remained empty.

It is often helpful to craft an objective function that comes in two parts: the first involves the primary criteria of interest, and the second involves a penalty function that treats constraint violations. A sketch of this pattern follows.
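A sketch of the two-part pattern, using a hypothetical fuel-truck schedule; the field names and the penalty weight are made up, and the penalty grows with the degree of violation, as a soft constraint suggests.

```python
def primary_cost(schedule):
    return schedule["fuel_used"]              # primary criterion (hypothetical)

def penalty(schedule, weight=100.0):
    # Soft constraint: penalize every minute any station sat empty;
    # the penalty grows with the degree of violation.
    return weight * sum(schedule["empty_minutes"])

def objective(schedule):
    """Two-part objective: primary cost plus constraint-violation penalty."""
    return primary_cost(schedule) + penalty(schedule)

schedule = {"fuel_used": 420.0, "empty_minutes": [0, 1, 0]}
print(objective(schedule))                    # 420 + 100*1 = 520
```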

SELF-ADAPTATION

Self-adaptation mechanisms in evolutionary algorithms allow dynamic parameter adjustment


during optimization. This enhances flexibility and efficiency, reducing manual tuning needs.
Inspired by biological systems, these mechanisms improve algorithm robustness across
different problems.
Self-adaptation can modify mutation rates, crossover rates, population sizes, and even genetic
representations. Implementation strategies include endogenous and exogenous approaches,
with direct or indirect parameter encoding.

Note: there is an optimum step size (the parameter that controls mutation or crossover strength) for which the likelihood of success is not maximized, but the rate of progress toward the optimum is maximized.

The 1/5 rule: the ratio of successful mutations to all mutations should be about 1/5. If the observed ratio exceeds 1/5, increase the step size; if it falls below 1/5, decrease it.
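A sketch of the 1/5 rule as a step-size controller for the simple (1+1) scheme shown earlier; the adjustment factor 0.85 and the 50-trial measurement window are conventional but arbitrary choices.

```python
import random

def f(x):
    return x * x

x, sigma = 10.0, 1.0
successes, window = 0, 50
for t in range(1, 5001):
    child = x + random.gauss(0.0, sigma)
    if f(child) < f(x):                  # a "successful" mutation improves f
        x, successes = child, successes + 1
    if t % window == 0:
        # 1/5 rule: widen the step if succeeding too often, narrow it otherwise.
        sigma = sigma / 0.85 if successes / window > 0.2 else sigma * 0.85
        successes = 0
print(x, sigma)
```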
UNIT-4

Evolutionary Learning

“prediction is the keystone of intellect”

When you can predict what is coming next, you can claim a degree of understanding and can take action to achieve a more desired outcome.
From that perspective, intelligence is the property that allows a system to allocate resources to meet goals in a range of environments. Such a system can be a person, a group, or any other species.
Evolutionary learning applies evolutionary algorithms to address optimization problems in
machine learning, and is suitable for many applications.
Many machine learning tasks involve solving complex optimization problems, such as optimizing non-differentiable, non-continuous, or non-unique objective functions; in some cases it can prove difficult to even define an explicit objective function. Evolutionary learning algorithms can be applied to such complex problems in place of traditional machine learning algorithms.

EVOLVING PARAMETERS OF A REGRESSION EQUATION

● The evolving parameters of a regression equation refer to the way the coefficients or
parameters of a regression model change over time or with different data inputs.
● A regression equation is used to describe the relationship between one dependent
variable and one or more independent variables. As the model is trained, refined, or
applied to new data, the parameters may evolve to better fit the data.
It would be useful to have a method for optimizing the coefficients of models that could be responsive to any arbitrary cost function. In this regard, evolutionary algorithms can be particularly useful; a sketch follows.
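As a sketch, here is an evolutionary fit of the two coefficients of a line y = a·x + b against an arbitrary cost (mean absolute error, which ordinary least squares does not directly minimize); the synthetic data and hyperparameters are made up for illustration.

```python
import random

random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [3.0 * x + 1.0 + random.gauss(0, 0.2) for x in xs]   # synthetic data

def cost(params):
    a, b = params
    # Mean absolute error: an "arbitrary" cost chosen to show the method
    # is not tied to least squares.
    return sum(abs(y - (a * x + b)) for x, y in zip(xs, ys)) / len(xs)

pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(30)]
for gen in range(300):
    children = [[p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1)]
                for p in pop]
    pop = sorted(pop + children, key=cost)[:30]   # keep the best half
print(pop[0], cost(pop[0]))   # coefficients should approach a = 3, b = 1
```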
EVOLVING THE STRUCTURE AND PARAMETERS
OF INPUT–OUTPUT SYSTEMS
Input–output modeling is central to many scientific activities.
maxim of parsimony(Occam’s Razor)

engineers and other practitioners often use a so-called maxim of parsimony, also known as
Occam’s Razor, to help design mathematical models.
This maxim, as paraphrased by Albert Einstein, says to keep models as simple as possible and no simpler. If a model is too simple, it will not be sufficient to explain the process being modeled. For example, it is not possible to make a high-fidelity mathematical model of a falling object using only a linear function; as we know from physics class, this requires a quadratic function.
On the other hand,
mathematical models can be made so complex that they can “explain” all of the available
historical data, but what they really do in those cases is fit the noise in the data—perfectly—and
thus these overly complex models tend to predict poorly because future data are by definition
uncorrelated to the noise in past data.
These aspects of mathematical modeling are very important generally, and quite pertinent in the
application of evolutionary algorithms for predictive modeling.
Evolving ARMA(X) Models

This is a time series prediction model.

Time series prediction models provide an opportunity to illustrate an important point in evolutionary computation: the effectiveness of a particular variation operator depends directly on the match between the representation and the evaluation function.
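For reference, an ARMA(p, q) model has the standard form below (notation mine, since the notes do not spell it out; a_i are autoregressive coefficients, b_j are moving-average coefficients, and ε_t is a noise term). The ARMAX variant adds a term for an exogenous input series.

```latex
x_t = \sum_{i=1}^{p} a_i \, x_{t-i} + \sum_{j=1}^{q} b_j \, \varepsilon_{t-j} + \varepsilon_t
```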
Evolutionary neural networks

Neural networks can provide a convenient representation for non linear functions .
They can be used for prediction and classification problems.

The evolutionary approach includes initializing some number n of parent neural networks, each with weights and biases drawn uniformly at random from [-2, 2]. Offspring are created from the parents by adding to every weight in the parent network a Gaussian random variable with zero mean and variance equal to the mean squared error of the parent. The process generates one offspring from each parent and is iterated for a specific number of generations; a sketch follows.
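A sketch of that scheme on a tiny one-hidden-layer network fitting made-up data; the network size and task are hypothetical, but the mutation rule (Gaussian noise whose variance equals the parent's mean squared error) follows the description above.

```python
import math, random

random.seed(0)
data = [(x / 10, math.sin(x / 10)) for x in range(-30, 31)]   # toy task

H = 5   # hidden units; each network is a flat list of 3*H + 1 weights
def init_net():
    return [random.uniform(-2, 2) for _ in range(3 * H + 1)]

def forward(w, x):
    hidden = [math.tanh(w[i] * x + w[H + i]) for i in range(H)]
    return sum(w[2 * H + i] * hidden[i] for i in range(H)) + w[3 * H]

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in data) / len(data)

parents = [init_net() for _ in range(20)]
for gen in range(200):
    children = []
    for w in parents:
        sigma = math.sqrt(mse(w))        # noise std dev = parent's RMSE
        children.append([wi + random.gauss(0.0, sigma) for wi in w])
    parents = sorted(parents + children, key=mse)[:20]
print(mse(parents[0]))
```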

EVOLVING CLUSTERS
Clustering is the activity of recognizing similarities between observed data and then associating
different data together as a group.
Also important is recognizing differences between observed clusters, so that the within-group
variance is much smaller than the between-group variance.

Evolutionary algorithms can be applied usefully to clustering.


The concept of Evolutionary Clustering proposed by Simpson is related to clustering methods in
data analysis, particularly in the context of dynamic or evolving data. In traditional clustering, the
goal is to group data points into clusters based on certain similarity or distance metrics, but
evolutionary clustering adds an extra layer of complexity by considering temporal dynamics and
data evolution over time.

Evolutionary clustering techniques aim to address the issue where data clusters evolve or
change over time. The central idea is that data patterns and relationships are not static, and
therefore clustering solutions should reflect this evolving nature.

This can be useful in contexts such as:

● Time-series data: where the relationships between data points change over time.
● Streaming data :where new data points are continuously arriving and old data may
become less relevant.
● Real-world systems where entities or behaviors change dynamically (e.g., user
preferences, stock prices, social networks).

Procedure According to Fogel

A population of 50 parents was created in which each solution encoded a complete clustering: a set of hyperboxes (in this case, rectangles), represented as a five-tuple.
Evolving Control Systems

Evolutionary Control Systems use evolutionary computing techniques, such as Genetic


Algorithms (GAs), Genetic Programming (GP), or Evolution Strategies (ES), to design, optimize,
or tune controllers for dynamic systems. These approaches are particularly useful for systems
with nonlinear, complex, or poorly understood dynamics.

Cart–Pole Systems
The Cart-Pole System is a classic problem in control theory and reinforcement learning. It
involves a pole attached to a cart, where the cart moves along a track, and the goal is to
balance the pole in an upright position by applying forces to the cart. This system is a
benchmark for studying the control of nonlinear and unstable systems.

The following figure shows a single pole atop a cart that is placed on a track. The goal in this
problem is to push and pull on the cart so as to keep the pole from falling over (or
falling beyond a threshold number of degrees) while simultaneously keeping the cart
from hitting either end of the track.

The single-pole system is controllable with a linear function of the position and velocity of the cart and the angle and angular velocity of the pole. This is given by the control law below.
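A standard linear state-feedback law for the single-pole system has the form below (notation mine; the specific gain values depend on the system parameters and are not reproduced here), where x is the cart position, θ is the pole angle, and the k_i are feedback gains:

```latex
F = k_1 x + k_2 \dot{x} + k_3 \theta + k_4 \dot{\theta}
```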
Multiple Poles in the Cart-Pole System

Truck Backer–Upper

The Truck Backer-Upper Problem is a classic problem in robotics and control systems that
involves guiding a truck and its attached trailer(s) to a specific target position and orientation,
typically in reverse, while avoiding obstacles and adhering to physical constraints. This problem
is widely studied because it models real-world challenges in motion planning, control, and
optimization.
System Components

1. Truck: A vehicle capable of forward and backward motion with steering controls.
2. Trailer: A rigid body attached to the truck via a hinge, which pivots based on the truck's
movement.
3. Environment: A 2D plane with:
○ A starting position and orientation for the truck and trailer.
○ A target position and orientation for the truck and trailer.
○ Obstacles or boundaries (optional).

EVOLUTIONARY GAMES

Games are characterized by rules that govern the behavior of one or more players. Behavior in games is a matter of stimulus and response for a given state.
This aspect can be applied to many real-life situations: games are very general, and life itself can be viewed as a game, with each person having a complex time-varying objective function that describes something fuzzy, like overall happiness or success.
Intelligence itself has been described in terms of games: the ability to make appropriate
decisions in order to adapt behavior to meet specific goals in a range of environments.
Evolution in these cases is often conducted in a format called coevolution. Instead of evolving
solutions against a fixed objective function, solutions are judged in terms of how well they
compete against other solutions in the same or a different population. Thus, the objectives may
change as the players themselves change.

Iterated Prisoner’s Dilemma


The game is structured with two players, each having two options: cooperate with the other player or defect against them. These options are typically denoted by the symbols C and D.
The rationale for the game, and why it is called a prisoner's dilemma, comes from imagining two prisoners who have been caught for a crime.
During separate interrogation, the prosecutor offers each a lesser sentence for "ratting out" his cohort in crime. If neither criminal rats out his partner, the sentence will be fairly low for each (as the evidence the prosecutor has is not that strong).
If one criminal defects against the other while the other refuses to defect, the rat goes free and the other criminal gets a long sentence (based on the rat's testimony). But if both criminals rat each other out, they each get somewhat long sentences.
For analysis, the prisoner's dilemma is often recast in terms of payoffs, as follows.
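A payoff matrix consistent with the point values mentioned below, and commonly used in the literature (e.g., in Axelrod's tournaments), is:

                 Player 2: C      Player 2: D
Player 1: C       (3, 3)            (0, 5)
Player 1: D       (5, 0)            (1, 1)

Each cell lists (Player 1's points, Player 2's points); C = cooperate, D = defect.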

Thinking about the game logically, a player may reason as follows: if the other player cooperates, you will do better if you defect; if the other player defects, you again will do better if you defect; therefore, you should defect. Of course, the other player would have the same thoughts, and thus both of you would choose to defect and get only 1 point each, when you each could have cooperated and received 3 points each. Research has shown that when this game is iterated over many plays between two players, the propensity for mutual defection is often reduced. Thus, the iterated prisoner's dilemma (IPD) is fundamentally different from the one-shot prisoner's dilemma.
Within evolutionary game theory, the prisoner's dilemma is analyzed as an iterated game, with the repetitive nature affording competitors the possibility of retaliating or cooperating based on the results of previous rounds of the game.
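As an illustration of a history-dependent IPD strategy, here is a sketch of tit-for-tat (cooperate first, then copy the opponent's previous move) scored against always-defect with the payoffs above; tit-for-tat is a well-known IPD strategy, though these notes do not single it out.

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_moves, their_moves):
    return "C" if not their_moves else their_moves[-1]

def always_defect(my_moves, their_moves):
    return "D"

def play(strat1, strat2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h1, h2), strat2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1 += p1; s2 += p2
    return s1, s2

# One exploited round, then mutual defection: scores (9, 14).
print(play(tit_for_tat, always_defect))
```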

UNIT 5

Collective intelligence is a kind of wisdom and knowledge that grows out of a group. The concept of collective intelligence states that when people or species work together, they form a type of intelligence that simply cannot exist on the individual level.

Collective intelligence is therefore shared or group intelligence that emerges from the
collaboration, collective efforts, and competition of many individuals and appears in consensus
decision making.

Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) is a population-based optimization approach that models the flocking behavior we see in certain animals. "Swarm" refers to a moving group.
Swarm refers to a moving group.
When considering how a flock of birds or a school of fish moves, it is easy to think that the actions of each individual are somehow coordinated with the actions of the others.
This behavior can be simulated on computers to solve certain kinds of problems in which there is a collection of possible solutions. Modeling the way birds flock, fish school, or things in general "swarm" can lead to an optimization algorithm; the organisms being modeled are likely optimizing something, such as group cohesiveness versus individual effort.
Particle swarm optimization mimics the behavior of an intelligent swarm by maintaining a collection of possible candidate solutions, where each solution is referred to as a particle.

❖ They each have a velocity vector, denoted by vi (t), where vi is the velocity of the ith
particle at a particular time t.
❖ Their velocities will change as a function of what is known about other particles in the collective, and also of what each particle remembers about where it has been.
❖ Each particle is given a memory. It remembers the location that has yielded the best
result from the objective function. Each particle also has knowledge about the results of
other particles in a neighborhood
❖ Each particle knows the location of the particle in its neighborhood that has the best
result from the objective function.
Note: In PSO the neighborhood can be small, like a particle and its two closest neighbors, or it
can expand to be the entire collection of particles.
➢ That information is used to change the velocities of the particles and thereby having
them move to different locations, searching for a better location.
➢ Each particle’s new velocity is a function of (i) its current velocity, (ii) the vector
that points to the particle’s own best location, and (iii) the vector that points to the best
location of the particles in its neighborhood.
➢ The vectors that point to known best locations are weighted by random variables.

Each particle then moves according to its new velocity.
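In the standard formulation (notation mine, since the notes do not spell it out), the velocity and position updates for particle i are commonly written as:

```latex
v_i(t+1) = w\,v_i(t) + c_1 r_1 \big(p_i - x_i(t)\big) + c_2 r_2 \big(g_i - x_i(t)\big), \qquad x_i(t+1) = x_i(t) + v_i(t+1)
```

where p_i is the best location the particle remembers, g_i is the best location known in its neighborhood, w is an inertia weight, c_1 and c_2 are acceleration constants, and r_1, r_2 are uniform random numbers in [0, 1].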

Generally a limit will be placed on each dimension, and there will be 10-50 particles, but the appropriate size is problem dependent, and the choice of how to construct a neighborhood for each particle is also problem dependent.

Differential Evolution

Differential evolution searches a landscape for optima by using difference vectors between existing solutions.

-> Differential evolution is an evolutionary algorithm offering a heuristic approach for minimizing possibly nonlinear and non-differentiable continuous-space functions.
-> It is a population-based search algorithm that relies on updating the locations of the individuals in the population.
-> It was introduced by Storn and Price.
The key ingredient in differential evolution is that individuals move based on the difference vectors from the individual to other individuals in the population. The population size needs to be at least four individuals, and as with all evolutionary or related methods, the population is initialized at random or based on some prior knowledge of where to search for a solution to the problem posed.

Each of the individuals in the population is subjected first to mutation, which is based on the individual's relationship to three other distinct individuals in the population, chosen at random each time. Let us call these three individuals x1, x2, and x3, and let us call the individual that we are mutating x0. A sketch of the resulting scheme follows.
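A sketch of the classic DE/rand/1/bin scheme consistent with this description: the mutant is v = x1 + F·(x2 − x3), then crossed gene by gene with x0. The differential weight F and crossover rate CR are conventional values, and the sphere function is a stand-in objective.

```python
import random

def f(x):
    return sum(xi * xi for xi in x)          # stand-in objective (sphere)

DIM, NP, F, CR = 5, 20, 0.8, 0.9
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]

for gen in range(300):
    for i in range(NP):
        x0 = pop[i]
        # Three other distinct individuals, chosen at random each time.
        x1, x2, x3 = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a + F * (b - c) for a, b, c in zip(x1, x2, x3)]
        jrand = random.randrange(DIM)        # force at least one mutant gene
        trial = [mutant[j] if (random.random() < CR or j == jrand) else x0[j]
                 for j in range(DIM)]
        if f(trial) <= f(x0):                # greedy one-to-one survival
            pop[i] = trial
print(min(f(p) for p in pop))
```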
Ant Colony Optimization (ACO)

Introduction
ACO is inspired by the foraging behavior of ant colonies, and targets discrete optimization
problems.
Real ants are capable of finding the shortest path from a food source to their colony without
using visual cues.
Also, they are capable of adapting to changes in the environment, for example finding a new shortest path once the old one is no longer feasible due to a new obstacle.
The primary means for ants to form and maintain the line is a pheromone trail. Ants deposit a certain amount of pheromone while walking, and each ant probabilistically prefers to follow a direction rich in pheromone. This elementary behavior of real ants can be used to explain how they can find the shortest path that reconnects a broken line after the sudden appearance of an unexpected obstacle has interrupted the initial path.
The main idea is that the self-organizing principles which allow the highly coordinated behavior of real ants can be exploited to coordinate populations of artificial agents that collaborate to solve computational problems.
For example, a foraging ant deposits a chemical on the ground which increases the probability that other ants will follow the same path.
Stigmergy, a form of indirect communication mediated by modifications of the environment, is used by ants (and many other insects).

Ant Colony Optimization (ACO) is a metaheuristic optimization technique inspired by the


foraging behavior of ants. It was first introduced by Italian researchers Marco Dorigo and Luca
Maria Gambardella in 1996. ACO is a powerful algorithm that has been applied to a wide range
of optimization problems, including the traveling salesman problem, job scheduling, and vehicle
routing, among others.
The ACO algorithm works by simulating the behavior of ants as they search for food. Ants
deposit pheromone on the ground as they move around, and other ants can sense the
pheromone and follow the path to the food source. In ACO, the pheromone represents the
quality of a solution, and the ants represent the optimization process.
ACO is a population-based algorithm that works by iteratively constructing solutions using a set
of artificial ants.
At each iteration, the ants construct a solution by moving from one component of the solution to
another based on a probabilistic decision rule.
The decision rule is based on the pheromone levels and the heuristic information.
Once all the ants have constructed a solution, the pheromone levels on the components of the
solutions are updated based on the quality of the solutions.

The pheromone evaporates over time, so the algorithm gradually shifts the search towards the
most promising regions of the solution space.

The traveling salesman problem (TSP) is the problem of finding a shortest closed tour
which visits all the cities in a given set exactly once.

ACO algorithm can be applied to solve this problem

There are three ideas from natural ant behavior that are transferred to the artificial ant colony: (i) the preference for paths with a high pheromone level, (ii) the higher rate of growth of the amount of pheromone on shorter paths, and (iii) the trail-mediated communication among ants.

1. An artificial ant is an agent which moves from city to city on a TSP graph. It chooses the city to move to using a probabilistic function both of the trail accumulated on edges and of a heuristic value, which was chosen here to be a function of the edge lengths.
2. Artificial ants probabilistically prefer cities that are connected by edges with a lot of pheromone trail and which are close by.
3. Initially, m artificial ants are placed in randomly selected cities.
4. At each time step they move to new cities and modify the pheromone trail on the edges used; this is termed local trail updating.
5. When all the ants have completed a tour, the ant that made the shortest tour modifies the edges belonging to its tour (termed global trail updating) by adding an amount of pheromone trail that is inversely proportional to the tour length.
There are many different ways to translate the above principles into a computational system apt to solve the TSP.

Here is the core of such an algorithm. For the first ant to start its trek, the probability of it visiting any of the other available cities is given by a formula of the following standard form.
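In the standard Ant System formulation (notation mine, not necessarily the exact one in the original notes), the probability of moving from city i to city j is, where τ_ij is the pheromone on edge (i, j), η_ij = 1/d_ij is the heuristic "visibility", α and β weight the two factors, and the sum runs over the cities still allowed:

```latex
p_{ij} = \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{k \in \mathrm{allowed}} [\tau_{ik}]^{\alpha}\,[\eta_{ik}]^{\beta}}
```

Global trail updating is then commonly written as τ_ij ← (1 − ρ)τ_ij + ρ/L⁺, where ρ is the pheromone decay rate and L⁺ is the length of the best tour.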

Ant colony optimization has been applied to a diverse set of engineering problems. For example, the method has been employed to construct neural network topologies, fuzzy controllers, decision trees, and fingerprint analyses.

EVOLVABLE HARDWARE

Evolvable hardware (EH) is a field focusing on the use of evolutionary algorithms (EA) to create
specialized electronics without manual engineering. It brings together reconfigurable hardware,
evolutionary computation, fault tolerance and autonomous systems.

★ Some of the earliest experiments in evolutionary computation, performed by a group in Berlin, Germany, involved manipulating physical devices.
★ This was done by constructing a device, such as a series of plates attached at hinges with measured protractor settings, then throwing dice or using a pachinko-like device to generate a random number, and making random changes to the physical structure. Scoring the set of hinged plates was accomplished in a de facto wind tunnel, where the drag of the plates was measured using a pitot tube. The objective was to adjust the angles between the plates such that the overall set of plates would have minimum drag.
★ More complicated experiments involved evolving the mechanical design of a bent
pipe and a nozzle that would offer maximum energy efficiency of fluids flowing
through the devices.
★ More recently, experiments in Lohn et al. [2015] describe the evolution of S-band
omnidirectional and medium gain antennas, which were evolved in simulation and
then verified in real-world settings. Figure 13.9 shows a sample design of the antenna,
which is unconventional. The final evolved designs were launched into space as part
of NASA’s Lunar Atmosphere and Dust Environment Explorer (LADEE), which
launched on September 6, 2013 and orbited the moon from October 2013 through
April 2014. The antennas provided “65% increased downlink coverage and 44% cost
savings for the mission”
Other interest in evolvable hardware has focused on adapting field-programmable gate arrays (FPGAs) [Thompson, 1996, 1998; Stoica et al., 2003] and recovering from faults that may occur [Greenwood, 2005]. Additional research has been performed on evolving electronic circuits in simulation.

INTERACTIVE EVOLUTIONARY COMPUTATION

Interactive evolutionary computation (IEC), or aesthetic selection, is a general term for methods of evolutionary computation that use human evaluation. Human evaluation is usually necessary when the form of the fitness function is not known (for example, visual appeal or attractiveness) or when the result of optimization should fit a particular user preference (for example, the taste of coffee or the color set of a user interface).
One of the first practical applications of interactive evolutionary computation was in the area of police sketch artistry, where someone can say whether or not a given face looks more or less like the suspect he or she has in mind, and the algorithm can adjust that face until it comes close to the person's mental image.
Interactive evolutionary algorithms have been used in designing ergonomic systems, portions of video games, personal hearing aids, and even finding a nice blend of coffee.

MULTICRITERIA EVOLUTIONARY OPTIMIZATION

Multicriteria problems involve more than one objective function to be minimized or maximized, and these objectives usually conflict. The answer is a set of solutions that defines the best trade-offs between the competing objectives.
Multiple, often conflicting objectives arise naturally in most real-world optimization scenarios, and evolutionary algorithms possess several characteristics that are desirable for this type of problem.
● In a single-objective optimization problem, the superiority of one solution over another is easily determined by comparing their objective function values.
● One approach to handling multiple criteria is to combine them into a single utility function that returns a real value (which is not possible for some problems).

For example, a financial asset management algorithm might be measured in terms of its return on investment (ROI), but also in terms of its volatility (such as the annualized standard deviation of its monthly returns). In this case, we would like the ROI to be high, but we also want the volatility (which is a way of measuring risk) to be low. One approach to handling multiple criteria is to combine them in a single utility function that returns a real value. For example, we could say that the value of the asset management algorithm is determined by

f(x) = a · x₁ / x₂

where x₁ is the ROI, x₂ is the volatility, and a is a scaling constant. As ROI increases, so does f(x); as volatility decreases, f(x) increases. So the best values of f(x) represent some trade-off between ROI and volatility.

In multi-objective optimization problems, the goodness of a solution is determined by dominance. A solution dominates another when it is equally good or better in all measurable criteria, and strictly better in at least one criterion.

There may be many nondominated solutions for a given multicriteria problem. The entire set of these solutions is called the Pareto set: the set of solutions for which, for any solution in the set, no objective can be improved without reducing performance with respect to a different objective. An algorithm that yields a large number of alternative solutions lying on or near the Pareto-optimal front is therefore of great practical value.

Evolutionary algorithms have been applied to this problem for many years. A simple dominance test is sketched below.
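A sketch of the dominance test for minimization objectives, plus extraction of the nondominated set from a list of objective vectors; the sample points are made up.

```python
def dominates(a, b):
    """True if a dominates b (minimizing): no worse in every objective,
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(points):
    """Return the points that no other point dominates (the Pareto set)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (risk, cost) pairs, both to be minimized.
print(nondominated([(1, 5), (2, 3), (3, 4), (2, 6)]))  # -> [(1, 5), (2, 3)]
```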

Advantages of GAs over Traditional Methods

Traditional optimization methods operate on a single candidate solution at a time, whereas our desired answer here is a set of solutions. Genetic algorithms fundamentally operate on a set of candidate solutions, which makes them a natural fit for multicriteria problems. One interesting approach to the problem is NSGA-II, which stands for nondominated sorting genetic algorithm II.

You might also like