Computational Intelligence Mid-2 Notes
Genetic Algorithm
Evolutionary Computation
These algorithms operate on a set of candidate solutions encoded as strings of binary digits or
other data structures.
Five phases are considered in a genetic algorithm.
➢ Initial population
➢ Selection
➢ Crossover
➢ Mutation
➢ Termination
Selection: It aims at selecting good individuals as parents for crossover, where the goodness
of a parent individual is quantified by its fitness. Most parent selection schemes therefore give
fitter individuals more reproductive opportunities than the rest of the population, so that "good"
offspring individuals are likely to be generated.
Selection Procedures
● As genetic algorithms are heuristic search and optimization techniques inspired by the
principles of natural selection and genetics, selection refers to the process of choosing
individuals from a population to serve as parents for the creation of the next generation.
This decision is influenced by the individuals' fitness, i.e., how well they solve the problem
at hand.
● To make selection possible, a fitness value, which depends on the objective
function, must be defined as a fitness function, and each individual's fitness must be
computed.
● Using the fitness values, a GA applies a selection mechanism to choose individuals from the
population and insert them into a mating pool. Individuals from the mating pool are used to
generate new offspring, and the resulting offspring form the basis of the next
generation. A selection mechanism in a GA is simply a process that favors the selection of
better individuals in the population for the mating pool.
Roulette Wheel Selection:
In this technique, all the chromosomes in the population are placed on a roulette wheel
according to their fitness values. Each individual is assigned a segment of the roulette wheel
whose size is proportional to its fitness: the bigger the fitness value, the larger the segment.
The virtual roulette wheel is then spun, and the individual corresponding to the segment on
which the wheel stops is selected. The process is repeated until the desired number of
individuals has been selected. Individuals with higher fitness have a higher probability of
selection, but there is no guarantee that good individuals will find their way into the
next generation.
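A minimal sketch of roulette wheel selection in Python (the function name is illustrative, and fitness values are assumed non-negative with a positive total):

```python
import random

def roulette_wheel_select(population, fitnesses, k):
    """Select k individuals with probability proportional to fitness.

    Each individual's slice of the wheel is fitness / total fitness,
    so fitter individuals are more likely, but not guaranteed, to be picked.
    """
    total = sum(fitnesses)
    weights = [f / total for f in fitnesses]
    return random.choices(population, weights=weights, k=k)
```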
Rank Selection:
Rank selection ranks the population, and every chromosome receives a fitness value from the
ranking: the worst has fitness 1 and the best has fitness N. This slows convergence and prevents
premature convergence, and it maintains selection pressure even when the fitness variance is low.
Tournament Selection:
Tournament selection provides selection pressure by holding a tournament among s competitors,
with s being the tournament size. The winner of the tournament is the individual with the
highest fitness among the competitors, and the winner is then inserted into the mating pool.
The mating pool, being comprised of tournament winners, has a higher average fitness than the
population as a whole. The selection pressure is the degree to which the better individuals
are favored: the higher the selection pressure, the more the better individuals are favored.
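A minimal sketch of one tournament (names are illustrative); larger values of s increase the selection pressure:

```python
import random

def tournament_select(population, fitnesses, s):
    """Hold a tournament among s random competitors; return the fittest."""
    competitors = random.sample(range(len(population)), s)
    winner = max(competitors, key=lambda i: fitnesses[i])
    return population[winner]
```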
Steady State Selection:
In this method, a few good (high-fitness) chromosomes are used for creating new offspring in
every generation. Some bad (low-fitness) chromosomes are then removed and the new offspring
are placed in their positions. The rest of the population migrates to the next generation
without going through the selection process. The main idea of this selection is that a large
part of the population should survive from one generation to the next.
Elitism Selection:
The idea here is to arrange the chromosomes in decreasing order of their fitness values and
then apply selection to pairs of chromosomes within the arranged set. In this way, the genetic
operators are applied between strong chromosomes or between weak chromosomes; there is no
chance of applying them between a weak and a strong chromosome. Elitism is a kind of selection
in which the best individuals are passed to the next generation as-is: elitism exempts the best
individuals from the reproduction process so that they enter the next generation without any
modification.
Note
In general, it is observed that a genetic algorithm with elitism finds better solutions
and converges faster.
Variation
Crossover and mutation are variation operators that provide the means for searching the
solution space for improved solutions, or potentially for weaker solutions that could lead to
improved solutions. There are many traditional variation operators, such as
binary mutation, Gaussian mutation, and one-point, n-point, and uniform crossover. The choice of
variation operators also depends on the choice of representation.
Crossover Operators:
Crossover resembles the reproduction mechanism in nature; crossover and mutation operators are
therefore collectively called reproductive operators. In general, a crossover operator combines
two individuals to form a new individual: it splits each individual into parts and then
assembles those parts into a new individual.
Crossover operators are representation-dependent.
Some classic crossover operators are listed below:
• One Point Crossover: One point crossover is a commonly used crossover operator because
of its simplicity. Given two individuals, it randomly chooses a cut point in their genomes. Then it
swaps the parts after (or before) the cut point between the two genomes.
• Two-Point Crossover: Two-point crossover is another commonly used crossover operator,
motivated by the argument that one-point crossover has a positional bias toward the terminal
positions. For instance, in a one-point crossover, the rightmost (or leftmost) part is
always swapped. Two-point crossover was proposed to avoid this positional bias.
• Uniform Crossover: Uniform crossover is a more general operator: each gene is given an equal
probability of being swapped.
• Blend Crossover: Blend crossover is commonly used in real-number optimization. Instead of
swapping genes, it blends two genes by arithmetic averaging to obtain intermediate values.
For instance, a blend crossover between the two vectors [1 2 3] and [4 5 6] gives the blended
vector [2.5 3.5 4.5]. Weights can also be applied here.
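Minimal sketches of three of these operators for list-encoded genomes (function names are illustrative):

```python
import random

def one_point_crossover(p1, p2):
    """Swap the tails of two genomes after a random cut point."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def uniform_crossover(p1, p2):
    """Swap each gene independently with probability 0.5."""
    c1, c2 = list(p1), list(p2)
    for i in range(len(p1)):
        if random.random() < 0.5:
            c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

def blend_crossover(p1, p2, w=0.5):
    """Arithmetic blend of two real-valued genomes; w = 0.5 gives the average."""
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]
```

For example, blend_crossover([1, 2, 3], [4, 5, 6]) returns [2.5, 3.5, 4.5], matching the vectors above.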
Mutation Operators
Mutation operators resemble the mutation mechanism in nature, in which some parts of the genome
undergo random changes. Thus, as a typical modeling of this mechanism, a mutation operator
probabilistically changes parts of the genome of an individual. Similar to crossover operators,
mutation operators are representation-dependent.
Some commonly used mutation operators are listed below:
• Bitflip Mutation: Bitflip mutation is commonly used for binary genomes. With a pre-defined
probability, each bit in a binary genome is probabilistically inverted.
• Random Mutation: Random mutation generalizes bitflip mutation and can be applied to many
genome types. With a pre-defined probability, each part of a genome is probabilistically
changed to a random value within the domain bounds.
• Delta Mutation: Delta mutation is commonly used for real-number genomes. With a pre-defined
probability, each real number in the genome is probabilistically incremented or decremented by
a pre-specified step size (called delta). Nonetheless, it is straightforward to make the step
size adaptive, similar to the trial vector generation in differential evolution.
• Gaussian Mutation: Gaussian mutation is also commonly used for real-number genomes. As with
delta mutation, each real number in the genome is probabilistically increased or decreased by a
step size; the difference is that the step size is a Gaussian random number.
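Minimal sketches of bitflip and Gaussian mutation (function names and parameters are illustrative):

```python
import random

def bitflip_mutation(genome, p):
    """Invert each bit independently with probability p."""
    return [1 - g if random.random() < p else g for g in genome]

def gaussian_mutation(genome, p, sigma):
    """With probability p, perturb each gene by zero-mean Gaussian noise
    with standard deviation sigma."""
    return [g + random.gauss(0.0, sigma) if random.random() < p else g
            for g in genome]
```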
Evolutionary Optimization
In solving optimization problems, the problem to be solved must be well defined, meaning that
any possible solution must be comparable to any other possible solution.
Most often, comparisons between two or more candidate solutions are based on quantitative
measures of how well a proposed solution meets the needs of the problem.
Evolutionary optimization is a quantitative technique in which there is a numeric description,
a function, that operates on a potential solution and returns either a single real number or
multiple numbers that describe the value of that solution.
Within evolutionary optimization, as with all engineering, there are essentially two forms of
optimization problems.
One form is numeric.
For example, find the point (x, y) such that
f(x, y) = x² + y² is minimized.
The other form is combinatorial. For example, given a collection of tools, each with a certain
weight, find the combination that has the greatest combined weight that will fit in a bag that can
hold only 25% of the weight of all the tools combined. This is a variant of the knapsack problem.
Note: The traveling salesperson problem (TSP) is also a combinatorial optimization problem.
CONSTRAINT HANDLING
Almost all real-world problems are constrained problems, so when applying evolutionary
algorithms it is important to consider how to treat the constraints of the problem. Some of
these constraints are part of the objective, whereas others are part of the parameters of a
solution and therefore impose boundary conditions on the search space. Each of these may pose
hard or soft constraints.
A hard constraint is one that, if violated, makes the entire proposed solution worthless.
For example, suppose you are designing a new golf club for professional
golfers. The United States Golf Association (USGA) requires that all golf clubs be at
least 18 in. in length and not more than 48 in. in length. These are hard constraints on
your design because anything outside of these limits cannot be used in regulation play.
A soft constraint is one that can be violated, but there is some imposed penalty for
violating it, and perhaps the penalty increases with the degree of violation. For
example, suppose you are creating a schedule for fuel trucks to refuel gas stations. A
constraint would be to design a schedule in which no station is ever empty, which
would mean the oil company would have customers waiting at the gas pumps for a
refueling truck. But this is not likely to be a hard constraint. If you crafted a schedule
that had a gas station out of fuel for 1 min, it might lead to some bad publicity. In fact,
it would be more likely to add to bad publicity the longer the station remained empty.
It is often helpful to craft an objective function that comes in two parts: the first involves
the primary criteria of interest, and the second involves a penalty function that treats
constraint violations.
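A minimal sketch of such a two-part objective; the helper names objective and violation, and the weight of 100.0, are hypothetical:

```python
def penalized_objective(solution, objective, violation, weight=100.0):
    """Two-part objective: primary criterion plus a weighted penalty.

    violation(solution) should return 0 when all soft constraints hold
    and grow with the degree of violation.
    """
    return objective(solution) + weight * violation(solution)
```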
SELF-ADAPTATION
Note that there is an optimum step size (for mutation or crossover) for which it is not the
likelihood of success that is maximized, but the rate of progress toward the optimum.
The 1/5 rule: the ratio of successful mutations to all mutations should be about 1/5.
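A sketch of step-size adaptation under the 1/5 rule; the multiplier 0.85 is one commonly cited choice, and the function name is illustrative:

```python
def adapt_step_size(sigma, success_rate, c=0.85):
    """Rechenberg's 1/5 success rule: shrink the mutation step size when
    fewer than 1/5 of recent mutations improved the parent, and grow it
    when more than 1/5 did."""
    if success_rate > 0.2:
        return sigma / c   # expand the search
    if success_rate < 0.2:
        return sigma * c   # contract the search
    return sigma
```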
UNIT-4
Evolutionary Learning
When you can predict what is coming next, you can claim a degree of understanding and take
action toward a more desired outcome.
From that perspective, intelligence is the property that allows a system to allocate resources
to meet goals in a range of environments. Such a system can be a person, a group, or any other
species.
Evolutionary learning applies evolutionary algorithms to address optimization problems in
machine learning, and is suitable for many applications.
Many machine learning tasks involve solving complex optimization problems, such as working
with non-differentiable, non-continuous, and non-unique objective functions; in some cases it
can prove difficult to even define an explicit objective function. Evolutionary learning
algorithms can be applied to such complex problems in place of traditional machine learning
algorithms.
● The evolving parameters of a regression equation refer to the way the coefficients or
parameters of a regression model change over time or with different data inputs.
● A regression equation is used to describe the relationship between one dependent
variable and one or more independent variables. As the model is trained, refined, or
applied to new data, the parameters may evolve to better fit the data.
It would be useful to have a method for optimizing the
coefficients of models that could be responsive to any arbitrary cost function. In this
regard, evolutionary algorithms can be particularly useful.
EVOLVING THE STRUCTURE AND PARAMETERS
OF INPUT–OUTPUT SYSTEMS
Input–output modeling is central to many scientific activities.
Maxim of Parsimony (Occam's Razor)
Engineers and other practitioners often use a so-called maxim of parsimony, also known as
Occam's Razor, to help design mathematical models.
This maxim, as paraphrased by Albert Einstein, says to keep models as simple as possible and
no simpler. If a model is too simple, it will not be sufficient to explain the process being
modeled. For example, it is not possible to make a high-fidelity mathematical model of a
falling object using only a linear function; as we know from physics class, this requires a
quadratic function.
On the other hand, mathematical models can be made so complex that they can "explain" all of
the available historical data. What they really do in those cases is fit the noise in the data
perfectly, and thus these overly complex models tend to predict poorly, because future data are
by definition uncorrelated with the noise in past data.
These aspects of mathematical modeling are very important generally, and quite pertinent in the
application of evolutionary algorithms for predictive modeling.
Evolving ARMA(X) Models
Neural networks provide a convenient representation for nonlinear functions and can be used
for prediction and classification problems.
The evolutionary approach begins by initializing some number n of parent neural networks, each
with weights and biases drawn uniformly at random from [−2, 2]. Offspring are created from the
parents by adding to every weight in the parent network a Gaussian random variable with zero
mean and variance equal to the parent's mean squared error. The process generates one offspring
from each parent and is iterated for a specified number of generations.
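A sketch of the offspring-generation step described above, assuming the network's weights and biases are flattened into a single list (the function name is hypothetical):

```python
import random

def mutate_network(weights, parent_mse):
    """Offspring weights: add a zero-mean Gaussian with variance equal to
    the parent's mean squared error to every weight and bias."""
    sigma = parent_mse ** 0.5  # variance = MSE, so std dev = sqrt(MSE)
    return [w + random.gauss(0.0, sigma) for w in weights]
```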
EVOLVING CLUSTERS
Clustering is the activity of recognizing similarities between observed data and then associating
different data together as a group.
Also important is recognizing differences between observed clusters, so that the within-group
variance is much smaller than the between-group variance.
Evolutionary clustering techniques aim to address the issue where data clusters evolve or
change over time. The central idea is that data patterns and relationships are not static, and
therefore clustering solutions should reflect this evolving nature. Typical settings include:
● Time-series data: where the relationships between data points change over time.
● Streaming data :where new data points are continuously arriving and old data may
become less relevant.
● Real-world systems where entities or behaviors change dynamically (e.g., user
preferences, stock prices, social networks).
In one study, a population of 50 parents was created in which each solution encoded a complete
clustering: a set of hyperboxes (in this case, rectangles), each represented as a five-tuple.
Evolving Control Systems
Cart–Pole Systems
The Cart-Pole System is a classic problem in control theory and reinforcement learning. It
involves a pole attached to a cart, where the cart moves along a track, and the goal is to
balance the pole in an upright position by applying forces to the cart. This system is a
benchmark for studying the control of nonlinear and unstable systems.
Picture a single pole atop a cart that is placed on a track. The goal in this
problem is to push and pull on the cart so as to keep the pole from falling over (or
falling beyond a threshold number of degrees) while simultaneously keeping the cart
from hitting either end of the track.
The single-pole system is controllable with a linear function of the position and velocity of
the cart and the angle and angular velocity of the pole. This is given by a control force of
the form

F = k₁x + k₂ẋ + k₃θ + k₄θ̇

where x is the cart position, θ is the pole angle, and k₁ through k₄ are gains to be determined.
Multiple Poles in the Cart–Pole System
Truck Backer–Upper
The Truck Backer-Upper Problem is a classic problem in robotics and control systems that
involves guiding a truck and its attached trailer(s) to a specific target position and orientation,
typically in reverse, while avoiding obstacles and adhering to physical constraints. This problem
is widely studied because it models real-world challenges in motion planning, control, and
optimization.
System Components
1. Truck: A vehicle capable of forward and backward motion with steering controls.
2. Trailer: A rigid body attached to the truck via a hinge, which pivots based on the truck's
movement.
3. Environment: A 2D plane with:
○ A starting position and orientation for the truck and trailer.
○ A target position and orientation for the truck and trailer.
○ Obstacles or boundaries (optional).
EVOLUTIONARY GAMES
Games are characterized by rules that govern the behavior of one or more players. Behavior in
games is a matter of stimulus and response for a given state.
This aspect can be applied to many real-life situations.
Games are very general, and life itself can be viewed as a game, with each person having a
complex, time-varying objective function that describes something fuzzy, like overall
happiness or success.
Intelligence itself has been described in terms of games: the ability to make appropriate
decisions in order to adapt behavior to meet specific goals in a range of environments.
Evolution in these cases is often conducted in a format called coevolution. Instead of evolving
solutions against a fixed objective function, solutions are judged in terms of how well they
compete against other solutions in the same or a different population. Thus, the objectives may
change as the players themselves change.
Consider the prisoner's dilemma, in which two players each choose to cooperate or defect.
Thinking about the game logically, a player may reason as follows: if the other player
cooperates, you will do better if you defect; if the other player defects, you again will do
better if you defect; therefore, you should defect. Of course, the other player has the same
thoughts, so both of you choose to defect and get only 1 point each, when you could each have
cooperated and received 3 points each. Research has shown that when this game is iterated over
many plays between two players, the propensity for mutual defection is often reduced. Thus,
the iterated prisoner's dilemma (IPD) is fundamentally different from the one-shot prisoner's
dilemma.
Within evolutionary game theory, the prisoner's dilemma is analyzed as an iterated game, with
the repetitive nature affording competitors the possibility of retaliating or defecting based
on the results of previous rounds of the game.
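A minimal sketch of the iterated game using the classic payoff values referenced above (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for a lone defector and lone cooperator), together with the well-known tit-for-tat strategy; all names are illustrative:

```python
# Classic payoff values: (row player, column player) scores.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, opponent_history):
    """Cooperate first, then repeat the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def play_ipd(strategy_a, strategy_b, rounds=10):
    """Play the iterated prisoner's dilemma and return both total scores."""
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = strategy_a(ha, hb), strategy_b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        score_a += pa; score_b += pb
    return score_a, score_b
```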
UNIT 5
Collective intelligence is a kind of wisdom and knowledge that grows out of a group.
The concept of collective intelligence states that when people/species work together, they form
a type of intelligence that simply cannot exist on the individual level.
Collective intelligence is therefore shared or group intelligence that emerges from the
collaboration, collective efforts, and competition of many individuals and appears in consensus
decision making.
Particle Swarm Optimization (PSO)
In particle swarm optimization, a collection of candidate solutions, called particles, moves
through the search space.
❖ Each particle has a velocity vector, denoted by vi(t), where vi is the velocity of the ith
particle at a particular time t.
❖ The velocities change as a function of what is known about other particles in the
collective, and also of what a particle remembers about where it has been.
❖ Each particle is given a memory: it remembers the location that has yielded the best
result from the objective function. Each particle also has knowledge about the results of
other particles in a neighborhood.
❖ Each particle knows the location of the particle in its neighborhood that has the best
result from the objective function.
Note: In PSO the neighborhood can be small, like a particle and its two closest neighbors, or it
can expand to be the entire collection of particles.
➢ That information is used to change the velocities of the particles, thereby having
them move to different locations in search of a better one.
➢ Each particle’s new velocity is a function of (i) its current velocity, (ii) the vector
that points to the particle’s own best location, and (iii) the vector that points to the best
location of the particles in its neighborhood.
➢ The vectors that point to known best locations are weighted by random variables.
Generally, a limit is placed on the velocity in each dimension. A swarm typically has 10-50
particles, but the appropriate size is problem dependent, and the choice of how to construct a
neighborhood for each particle is also problem dependent.
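A minimal sketch of the velocity and position update for one particle, combining the three terms described above; the inertia and attraction weights w, c1, c2 are illustrative defaults, not values from the notes:

```python
import random

def update_particle(x, v, pbest, nbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle.

    x: current position; v: current velocity;
    pbest: this particle's own best-known location;
    nbest: the best location found in the particle's neighborhood.
    The pulls toward pbest and nbest are weighted by random variables.
    """
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (nb - xi)
             for xi, vi, pb, nb in zip(x, v, pbest, nbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```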
Differential Evolution
Differential evolution searches a landscape for optima by using difference vectors between
existing solutions.
-> Differential evolution is an evolutionary algorithm, a heuristic approach for minimizing
possibly nonlinear and non-differentiable continuous-space functions.
-> It is a population-based search algorithm that relies on updating the locations of the
individuals in the population.
-> It was introduced by Storn and Price.
The key ingredient in differential evolution is that individuals move based on difference
vectors computed between individuals in the population. The population size needs to be at
least four individuals, and, as with all evolutionary or related methods, the population is
initialized at random or based on some prior knowledge of where to search for a solution to
the problem posed.
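A sketch of the classic DE/rand/1/bin trial-vector construction; the function name and the parameter defaults F = 0.8 and CR = 0.9 are illustrative, not from the notes:

```python
import random

def de_trial_vector(population, i, F=0.8, CR=0.9):
    """Build a DE/rand/1/bin trial vector for target individual i.

    The trial vector starts from a random base individual r1 and adds the
    scaled difference vector between two other individuals r2 and r3;
    binomial crossover with rate CR then mixes it with the target.
    This is why the population must contain at least four individuals.
    """
    idxs = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = random.sample(idxs, 3)
    target = population[i]
    mutant = [population[r1][d] + F * (population[r2][d] - population[r3][d])
              for d in range(len(target))]
    j_rand = random.randrange(len(target))  # guarantees at least one mutant gene
    return [mutant[d] if (random.random() < CR or d == j_rand) else target[d]
            for d in range(len(target))]
```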
Ant Colony Optimization: Introduction
ACO is inspired by the foraging behavior of ant colonies, and targets discrete optimization
problems.
Real ants are capable of finding the shortest path from a food source to their colony without
using visual cues.
Also, they are capable of adapting to changes in the environment, for example finding a new
shortest path once the old one is no longer feasible due to a new obstacle.
The primary means for ants to form and maintain the line is a pheromone trail. Ants deposit a
certain amount of pheromone while walking, and each ant probabilistically prefers to follow a
direction rich in pheromone. This elementary behavior of real ants can be used to explain how
they can find the shortest path that reconnects a broken line after the sudden appearance of
an unexpected obstacle has interrupted the initial path.
The main idea is that the self-organizing principles which allow the highly coordinated behavior
of real ants can be exploited to coordinate populations of artificial
agents that collaborate to solve computational problems.
For example, a foraging ant deposits a chemical on the ground which increases the probability
that other ants will follow the same path.
Ants (and many other insects) use stigmergy, a form of indirect communication mediated by
modifications of the environment.
The pheromone evaporates over time, so the algorithm gradually shifts the search towards the
most promising regions of the solution space.
The traveling salesman problem (TSP) is the problem of finding a shortest closed tour
which visits all the cities in a given set exactly once.
There are three ideas from natural ant behavior that we have transferred to the artificial ant
colony: (i) the preference for paths with a high pheromone level, (ii) the higher rate of
growth of the amount of pheromone on shorter paths, and (iii) the trail-mediated communication
among ants.
1. An artificial ant is an agent which moves from city to city on a TSP graph. It chooses the
city to move to using a probabilistic function both of the trail accumulated on edges and of a
heuristic value, chosen here to be a function of the edge lengths.
2. Artificial ants probabilistically prefer cities that are connected by edges with a lot of
pheromone trail and which are close by.
3. Initially, m artificial ants are placed in randomly selected cities.
4. At each time step they move to new cities and modify the pheromone trail on the edges used;
this is termed local trail updating.
5. When all the ants have completed a tour, the ant that made the shortest tour modifies the
edges belonging to its tour (termed global trail updating) by adding an amount of pheromone
trail that is inversely proportional to the tour length.
There are many different ways to translate the above principles into a computational system
apt to solve the TSP. In a basic algorithm, an ant standing at city i chooses the next city j
from the set of cities not yet visited with probability

p(i, j) = [τ(i, j)^α · η(i, j)^β] / Σₖ [τ(i, k)^α · η(i, k)^β]

where the sum runs over the allowed (unvisited) cities k, τ(i, j) is the pheromone trail on
edge (i, j), η(i, j) = 1/d(i, j) is the heuristic "visibility" derived from the edge length,
and α and β control the relative influence of trail and visibility.
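A minimal sketch of this transition rule; the nested-list data structures tau and dist and the parameter defaults are illustrative assumptions:

```python
import random

def choose_next_city(current, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Pick the next city with probability proportional to
    pheromone^alpha * visibility^beta, where visibility = 1/distance."""
    scores = [(tau[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta)
              for j in unvisited]
    total = sum(scores)
    weights = [s / total for s in scores]
    return random.choices(unvisited, weights=weights, k=1)[0]
```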
Ant colony optimization has been applied to a diverse set of engineering problems.
For example, the method has been employed to construct neural network topologies, fuzzy
controllers, and decision trees, and for fingerprint analysis.
EVOLVABLE HARDWARE
Evolvable hardware (EH) is a field focusing on the use of evolutionary algorithms (EA) to create
specialized electronics without manual engineering. It brings together reconfigurable hardware,
evolutionary computation, fault tolerance and autonomous systems.
INTERACTIVE EVOLUTIONARY COMPUTATION
Interactive evolutionary computation (IEC), or aesthetic selection, is a general term for
methods of evolutionary computation that use human evaluation. Human evaluation is usually
necessary when the form of the fitness function is not known (for example, visual appeal or
attractiveness) or when the result of optimization should fit a particular user preference
(for example, the taste of coffee or the color set of a user interface).
One of the first practical applications of interactive evolutionary computation was in the area of
police sketch artistry where someone can say whether or not a given face looks more or less
like the suspect he/she has in mind and the algorithm can adjust that face until it comes close to
the person’s mental image.
Interactive evolutionary algorithms have been used in designing ergonomic systems, portions of
video games, and personal hearing aids, and even in finding a nice blend of coffee.
MULTIOBJECTIVE OPTIMIZATION
These problems involve more than one objective function to be minimized or maximized, and the
objectives are usually conflicting.
The answer is a set of solutions that define the best tradeoffs between the competing
objectives.
Multiple, often conflicting objectives arise naturally in most real-world optimization scenarios.
Evolutionary algorithms possess several characteristics that are desirable for this type of
problem.
● In a single-objective optimization problem, the superiority of one solution over other
solutions is easily determined by comparing their objective function values.
One approach to handling multiple criteria is to combine them into a single utility function
that returns a real value (which is not possible for some problems).
The entire set of these solutions is called the Pareto set. This is the set of solutions for
which no objective can be improved without reducing performance with respect to a different
objective.
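A minimal sketch of a Pareto-dominance test and front extraction, assuming each solution is a tuple of objective values to be minimized (function names are illustrative):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (all objectives minimized):
    a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]
```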
An algorithm that gives a large number of alternative solutions lying on or near the
Pareto-optimal front is therefore of great practical value. Evolutionary algorithms have been
applied to this problem for many years.