482 LectureNotes Chapter 2

Chapter 2 discusses various optimization techniques used by engineers to enhance product design and reduce production costs. It covers the classification of optimization problems based on constraints, design variables, physical structure, and the nature of the equations involved, along with classical optimization techniques for single- and multi-variable functions. Additionally, it introduces genetic algorithms as a method for evolving solutions based on a fitness function.

Chapter 2

Optimization Techniques – Part I


The ever-increasing demand on engineers to lower production costs to withstand
global competition has prompted engineers to look for rigorous methods of decision
making, such as optimization methods, to design and produce products and systems both
economically and efficiently.
Optimization methods, coupled with modern tools of computer-aided design, are
also being used to enhance the creative process of conceptual and detailed design of
engineering systems. Although most engineering design problems can be solved using nonlinear
programming techniques, there are a variety of engineering applications for which other
optimization methods, such as linear, geometric, dynamic, integer, and stochastic
programming techniques, are more suitable. Some of the recently developed methods of
optimization are genetic algorithms, simulated annealing, particle swarm optimization, ant
colony optimization, neural-network-based methods, and fuzzy optimization.

INTRODUCTION TO OPTIMIZATION

Optimization is the act of obtaining the best result under given circumstances. In design,
construction, and maintenance of any engineering system, engineers have to take many
technological and managerial decisions at several stages. The ultimate goal of all such
decisions is either to minimize the effort required or to maximize the desired benefit. Since
the effort required or the benefit desired in any practical situation can be expressed as a
function of certain decision variables, optimization can be defined as the process of finding
the conditions that give the maximum or minimum value of a function. If a point x*
corresponds to the minimum value of function f(x), the same point also corresponds to the
maximum value of the negative of the function, −f(x).
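This equivalence between minimization and maximization is easy to verify numerically; a minimal sketch, using a hypothetical test function f(x) = (x - 3)^2 + 1 whose minimum lies at x* = 3:

```python
# The point x* that minimizes f(x) also maximizes -f(x).
# f(x) = (x - 3)^2 + 1 is a hypothetical example with its minimum at x* = 3.

def f(x):
    return (x - 3) ** 2 + 1

# Evaluate f on a coarse grid over [0, 6] and compare the two optimizers.
xs = [i * 0.01 for i in range(601)]
x_min = min(xs, key=f)                   # minimizer of f
x_max = max(xs, key=lambda x: -f(x))     # maximizer of -f
print(x_min, x_max)                      # the same point, x* = 3
```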

CLASSIFICATION OF OPTIMIZATION PROBLEMS


Classification Based on the Existence of Constraints
Any optimization problem can be classified as constrained or unconstrained, depending on
whether constraints exist in the problem.
Classification Based on the Nature of the Design Variables
Based on the nature of design variables encountered, optimization problems can be
classified into two broad categories. In the first category, the problem is to find values for a
set of design parameters that make some prescribed function of these parameters
minimum subject to certain constraints. Such problems are called parameter or static
optimization problems.
In the second category of problems, the objective is to find a set of design parameters
that are all continuous functions of some other parameter, such that an objective
function is minimized subject to a set of constraints. This type of problem, where each design variable is
a function of one or more parameters, is known as a trajectory or dynamic optimization
problem.

Classification Based on the Physical Structure of the Problem


Depending on the physical structure of the problem, optimization problems can be
classified as optimal control and non-optimal-control problems. An optimal control (OC)
problem is a mathematical programming problem involving a number of stages, where each
stage evolves from the preceding stage in a prescribed manner. It is usually described by
two types of variables: the control (design) and the state variables. The control variables
define the system and govern the evolution of the system from one stage to the next, and
the state variables describe the behavior or status of the system in any stage. The problem
is to find a set of control or design variables such that the total objective function (also
known as the performance index, PI) over all the stages is minimized subject to a set of
constraints on the control and state variables.
Classification Based on the Nature of the Equations Involved
According to this classification, optimization problems can be classified as linear, nonlinear,
geometric, and quadratic programming problems. This classification is extremely useful
from the computational point of view since there are many special methods available for
the efficient solution of a particular class of problems. Thus the first task of a designer
would be to investigate the class of problem encountered.

Nonlinear Programming Problem: If any of the functions among the objective and
constraint functions is nonlinear, the problem is called a nonlinear programming (NLP)
problem. This is the most general programming problem and all other problems can be
considered as special cases of the NLP problem.

Linear Programming Problem: If the objective function and all the constraints are linear
functions of the design variables, the mathematical programming problem is called a linear
programming (LP) problem. A linear programming problem is often stated in the following
standard form:

Minimize f(X) = c1 x1 + c2 x2 + . . . + cn xn
subject to
a11 x1 + a12 x2 + . . . + a1n xn = b1
. . .
am1 x1 + am2 x2 + . . . + amn xn = bm
x1, x2, . . . , xn ≥ 0
Geometric Programming Problem: A geometric programming problem is one in which the
objective function and constraints are expressed as posynomials in X.
Definition: A function h(X) is called a posynomial if h can be expressed as the sum of power
terms, each of the form

ci x1^(ai1) x2^(ai2) . . . xn^(ain)

where the coefficients ci are positive and the exponents aij are real numbers.

Quadratic Programming Problem: A quadratic programming problem is a nonlinear
programming problem with a quadratic objective function and linear constraints. It is
usually formulated as follows:

Minimize F(X) = c + (sum over i of qi xi) + (sum over i and j of Qij xi xj)
subject to
ai1 x1 + ai2 x2 + . . . + ain xn = bi, i = 1, 2, . . . , m
x1, x2, . . . , xn ≥ 0

where c, qi, Qij, aij, and bi are constants.
CLASSICAL OPTIMIZATION TECHNIQUES

Single-Variable Optimization

A function of one variable f(x) is said to have a relative or local minimum at x = x* if f(x*) ≤
f(x* + h) for all sufficiently small positive and negative values of h. Similarly, a point x* is
called a relative or local maximum if f(x*) ≥ f(x* + h) for all values of h sufficiently close to
zero. A function f(x) is said to have a global or absolute minimum at x* if f(x*) ≤ f(x) for all
x, and not just for all x close to x*, in the domain over which f(x) is defined. Similarly, a
point x* will be a global maximum of f(x) if f(x*) ≥ f(x) for all x in the domain. The
following figure shows the difference between the local and global optimum points.
A single-variable optimization problem is one in which the value of x = x* is to be found in
the interval [a, b] such that x* minimizes f(x). The following two theorems provide the
necessary and sufficient conditions for the relative minimum of a function of a single
variable. Necessary condition: if a function f(x) is defined in the interval a ≤ x ≤ b and has a
relative minimum at x = x*, where a < x* < b, and if the derivative df(x)/dx = f'(x) exists as a
finite number at x = x*, then f'(x*) = 0. Sufficient condition: if, in addition, f''(x*) > 0, then
x* is a relative minimum; if f''(x*) < 0, then x* is a relative maximum.

[Figure: Relative and Global Maxima]
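The necessary condition f'(x*) = 0 can be checked numerically; a minimal sketch, using the hypothetical function f(x) = x^2 - 4x + 5 whose minimum lies at x* = 2:

```python
# Numerical check of the necessary condition f'(x*) = 0 at x* = 2 for
# f(x) = x^2 - 4x + 5.

def f(x):
    return x * x - 4 * x + 5

def derivative(g, x, h=1e-6):
    # Central-difference approximation of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

x_star = 2.0
print(derivative(f, x_star))   # approximately 0: the necessary condition holds
# f(x* + h) >= f(x*) for small positive and negative h, so x* is a minimum.
print(all(f(x_star + h) >= f(x_star) for h in (-0.1, -0.01, 0.01, 0.1)))
```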


Optimization Problems with Functions of Two Variables:

Suppose a manufacturer produces two VCR models, the deluxe and the standard, and that
the total cost of producing x units of the deluxe and y units of the standard is given by the
function C(x, y). How would you find the levels of production x = a and y = b that result in
minimal cost? Or perhaps the output of a certain production process is given by Q(K, L),
where K and L measure capital and labor expenditure, respectively. What levels of
expenditure K0 and L0 result in maximum output?

We learned how to use the derivative f'(x) to find the largest and smallest values of a
function of a single variable f(x), and the goal of this section is to extend those methods to
functions of two variables f(x, y).

The function f(x, y) is said to have a relative maximum at the point P(a, b) in the domain of f
if f(a, b) ≥ f(x, y) for all points sufficiently close to P. Likewise, f(x, y) has a relative
minimum at Q(c, d) if f(c, d) ≤ f(x, y) for all points (x, y) sufficiently close to Q.
The points (a, b) in the domain of f(x, y) for which both fx(a, b) = 0 and fy(a, b) = 0 are said to
be critical points of f. Like the critical points for functions of one variable, these critical
points play an important role in the study of relative maxima and minima.

Problem 1: You decide to build a box that has the shape of a rectangular prism with a
volume of 1000 cubic centimeters. Find the dimensions x, y and z of the box so that
the total surface area of all 6 faces of the box is minimum.
Solution to Problem 1:
The total area A of all six faces of the prism is given by
A = 2xy + 2yz + 2zx

The volume of the box is given; hence

xyz = 1000
Solving the above for z gives
z = 1000 / (xy)
Substituting z into the expression for the area A, we obtain
A(x,y) = 2xy + 2y * 1000 / (xy) + 2x * 1000 / (xy) = 2xy + 2000 / x + 2000 / y

We now need to find x and y that minimize the area A. We first find the critical
points and then test the second partial derivatives. The first-order partial derivatives of A
are given by
Ax(x,y) = 2y - 2000 / x^2
Ay(x,y) = 2x - 2000 / y^2
The critical points are found by setting Ax(x,y) = 0 and Ay(x,y) = 0 and solving the
resulting system, which gives
2y - 2000 / x^2 = 0, which gives 2yx^2 = 2000
2x - 2000 / y^2 = 0, which gives 2xy^2 = 2000
Solving the above gives
x = 10 and y = 10
We now find the second-order partial derivatives:
Axx(x,y) = 4000/x^3
Ayy(x,y) = 4000/y^3
Axy(x,y) = 2
We now test the values of Axx, Ayy and Axy at the point (10,10) in order to use the
theorem on minima and maxima of functions of 2 variables.
D = Axx(10,10) Ayy(10,10) - Axy^2(10,10) = 4 * 4 - 2^2 = 12
D is positive and Axx(10,10) = 4 is positive, and therefore the area A is minimum for
x = 10 cm
y = 10 cm
z = 1000/(xy) = 10 cm.
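The solution to Problem 1 can be verified numerically; a quick sketch that confirms the partial derivatives vanish at (10, 10) and that nearby points give a larger area:

```python
# Numerical check of Problem 1: minimize A(x, y) = 2xy + 2000/x + 2000/y.
# The analytical answer above is x = y = 10, z = 10.

def area(x, y):
    return 2 * x * y + 2000 / x + 2000 / y

def A_x(x, y):  # dA/dx = 2y - 2000/x^2
    return 2 * y - 2000 / x ** 2

def A_y(x, y):  # dA/dy = 2x - 2000/y^2
    return 2 * x - 2000 / y ** 2

print(A_x(10, 10), A_y(10, 10))   # both 0: (10, 10) is a critical point
print(area(10, 10))               # 600, the minimal surface area in cm^2
print(area(10.5, 10) > area(10, 10), area(10, 9.5) > area(10, 10))
```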
Problem 2: Find the dimensions of a six-faced box that has the shape of a rectangular
prism with the largest possible volume that you can make with 12 square meters of
cardboard.
Solution to Problem 2:
Using all available cardboard to make the box, the total area A of all six faces of the prism is
given by
A = 2xy + 2yz + 2zx = 12
The volume V of the box is given by
V = xyz
Solving the equation 2xy + 2yz + 2zx = 12 for z gives
z = (6 - xy) / (x + y)
Substituting z into the expression for the volume V, we obtain
V(x,y) = xy (6 - xy) / (x + y)
Let us find the critical points by first finding the first-order partial derivatives:
Vx(x,y) = -y^2 (x^2 + 2xy - 6) / (x + y)^2
Vy(x,y) = -x^2 (y^2 + 2xy - 6) / (x + y)^2
We now solve the system of equations given by Vx = 0 and Vy = 0. One obvious solution is
given by the point (0,0), but it is not physically possible. Other solutions are found by setting
x^2 + 2xy - 6 = 0
and
y^2 + 2xy - 6 = 0
Subtracting the equations term by term, we obtain
x^2 - y^2 = 0
which gives
x = y and x = -y
The solution x = -y is not valid for this problem since both x and y are dimensions and
cannot be negative. Using x = y in the equation x^2 + 2xy - 6 = 0, we obtain
x^2 + 2x^2 - 6 = 0
Solving for x gives
x = sqrt(2)
and hence
y = x = sqrt(2)
Let us now find the second-order partial derivatives:
Vxx(x,y) = -2y^2 (y^2 + 6) / (x + y)^3
Vyy(x,y) = -2x^2 (x^2 + 6) / (x + y)^3
Vxy(x,y) = -2xy (x^2 + 3xy + y^2 - 6) / (x + y)^3
We now need the values of Vxx, Vyy and Vxy at (sqrt(2), sqrt(2)) to find the value of
D = Vxx(sqrt(2),sqrt(2)) Vyy(sqrt(2),sqrt(2)) - Vxy^2(sqrt(2),sqrt(2)) in order to use the
theorem on minima and maxima of functions of 2 variables. At this point
Vxx = Vyy = -sqrt(2) and Vxy = -1/sqrt(2), so
D = Vxx(sqrt(2),sqrt(2)) Vyy(sqrt(2),sqrt(2)) - Vxy^2(sqrt(2),sqrt(2)) = 2 - 1/2 = 3/2
D is positive and Vxx(sqrt(2),sqrt(2)) = -sqrt(2) is negative, and therefore the volume V is
maximum for
x = sqrt(2) meters
y = sqrt(2) meters
z = (6 - xy) / (x + y) = sqrt(2) meters.
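As a numerical sanity check on Problem 2, the value of V at (sqrt(2), sqrt(2)) can be compared with nearby points:

```python
import math

# Numerical check of Problem 2: maximize V(x, y) = xy(6 - xy)/(x + y).
# The analysis above gives x = y = z = sqrt(2); we confirm that V is larger
# there than at surrounding points.

def volume(x, y):
    return x * y * (6 - x * y) / (x + y)

s = math.sqrt(2)
v_star = volume(s, s)   # = 2 * 4 / (2 * sqrt(2)) = 2 * sqrt(2), i.e. x*y*z
print(v_star)           # approximately 2.8284
neighbors = [(s + dx, s + dy) for dx in (-0.1, 0, 0.1) for dy in (-0.1, 0, 0.1)
             if (dx, dy) != (0, 0)]
print(all(volume(x, y) < v_star for x, y in neighbors))   # True: a local maximum
```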
Problem 3: Find the distance from the point (1,2,-1) to the plane given by the
equation x - y + z = 3.
Solution to Problem 3:
One way to find the distance from a point to a plane is to take a point (x,y,z) on the plane;
find the distance between this point and the given point and minimize it. Because the
distance involves the square root, it would be better to minimize the square of the distance.
Let f be the square of the distance between the given point and a point (x,y,z) on the plane:
f(x,y,z) = (x - 1)^2 + (y - 2)^2 + (z + 1)^2
We now solve the equation of the plane for z to obtain
z = 3 - x + y
Substituting 3 - x + y for z in f gives
F(x,y) = (x - 1)^2 + (y - 2)^2 + (-x + y + 4)^2
We now find the first order partial derivatives
Fx(x,y) = 2(x - 1) + 2(-1)(-x + y + 4)
Fy(x,y) = 2(y - 2) + 2(-x + y + 4)
We now need to find the critical points by setting the first partial derivatives equal to zero.
2(x - 1) + 2(-1)(-x + y + 4) = 0 and 2(y - 2) + 2(-x + y + 4) = 0
We now solve the system of equations to obtain the critical point
(x, y) = (8/3, 1/3)
We now calculate the second order derivatives
Fxx(x,y) = 4
Fyy(x,y) = 4
Fxy(x,y) = -2
We now need to find the sign of D = Fxx(8/3,1/3) Fyy(8/3,1/3) - Fxy^2(8/3,1/3) in order to
use the theorem on minima and maxima of functions of 2 variables:
D = 4 * 4 - (-2)^2 = 12
Since D is positive and Fxx is positive, F has a minimum at the point (8/3, 1/3), which
corresponds to the point on the plane given by
(8/3, 1/3, 2/3)
The distance d between the given point and the plane is given by
d = sqrt [ (1 - 8/3)^2 + (2 - 1/3)^2 + (-1 - 2/3)^2 ]
= 5 / sqrt(3)
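The result of Problem 3 can be cross-checked against the standard point-to-plane distance formula, which should agree with the value obtained by minimization:

```python
import math

# The distance from a point (x0, y0, z0) to the plane ax + by + cz = d is
# |a*x0 + b*y0 + c*z0 - d| / sqrt(a^2 + b^2 + c^2).

def point_plane_distance(point, a, b, c, d):
    x0, y0, z0 = point
    return abs(a * x0 + b * y0 + c * z0 - d) / math.sqrt(a * a + b * b + c * c)

dist = point_plane_distance((1, 2, -1), 1, -1, 1, 3)   # plane x - y + z = 3
print(dist, 5 / math.sqrt(3))                          # both approximately 2.8868
```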
GENETIC ALGORITHMS:

GENETIC-ALGORITHM starts with a set of one or more individuals and applies
selection and reproduction operators to "evolve" an individual that is successful, as
measured by a fitness function. The fitness function depends on the problem, but in any
case, it is a function that takes an individual as input and returns a real number as output.
Since the evolutionary process learns an agent function based on occasional rewards as
supplied by the selection function, it can be seen as a form of reinforcement learning.
GENETIC-ALGORITHM simply searches directly in the space of individuals, with the goal of
finding one that maximizes the fitness function.
In the "classic" genetic algorithm approach, an individual is represented as a string
over a finite alphabet. Each element of the string is called a gene. In real DNA, the alphabet
is AGTC (adenine, guanine, thymine, cytosine), but in genetic algorithms, we usually use the
binary alphabet (0,1). Some authors reserve the term "genetic algorithm" for cases where
the representation is a bit string, and use the term evolutionary programming when the
representation is more complicated. Other authors make no distinction, or make a slightly
different one. The selection strategy is usually randomized, with the probability of selection
proportional to fitness. That is, if individual X scores twice as high as Y on the fitness
function, then X is twice as likely to be selected for reproduction as Y. Usually,
selection is done with replacement, so that a very fit individual will get to reproduce
several times.
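Fitness-proportional selection with replacement can be sketched in a few lines; the bit-string fitness used here (the count of 1-bits) is a hypothetical example:

```python
import random

# Roulette-wheel selection: sample individuals with replacement, with
# probability proportional to fitness.

def select(population, fitness_fn, k):
    weights = [fitness_fn(ind) for ind in population]
    # random.choices samples with replacement, proportionally to the weights.
    return random.choices(population, weights=weights, k=k)

random.seed(0)
pop = ["1111", "1100", "0000"]
parents = select(pop, lambda s: s.count("1"), k=1000)
# "1111" (fitness 4) is drawn about twice as often as "1100" (fitness 2),
# and "0000" (fitness 0) never.
print(parents.count("1111"), parents.count("1100"), parents.count("0000"))
```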

The genetic algorithm finds a fit individual using simulated evolution:

function GENETIC-ALGORITHM(population, FITNESS-FN) returns an individual

inputs: population, a set of individuals
        FITNESS-FN, a function that measures the fitness of an individual

repeat
    parents <- SELECTION(population, FITNESS-FN)
    population <- REPRODUCTION(parents)
until some individual is fit enough
return the best individual in population, according to FITNESS-FN

Reproduction is accomplished by cross-over and mutation. First, all the individuals
that have been selected for reproduction are randomly paired. Then for each pair, a
cross-over point is randomly chosen. Think of the genes of each parent as being numbered from 1
to N. The cross-over point is a number in that range; let us say it is 10. That means that one
offspring will get genes 1 through 10 from the first parent, and the rest from the second
parent. The second offspring will get genes 1 through 10 from the second parent, and the
rest from the first. However, each gene can be altered by random mutation to a different
value, with small independent probability.
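The reproduction step just described can be sketched as follows, with individuals represented as bit strings over the alphabet {0, 1}:

```python
import random

# Single-point cross-over: the two offspring swap gene segments around a
# randomly chosen cross-over point.

def crossover(parent1, parent2):
    point = random.randint(1, len(parent1) - 1)   # the cross-over point
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def mutate(individual, rate=0.01):
    # Flip each gene independently with probability `rate`.
    flip = {"0": "1", "1": "0"}
    return "".join(flip[g] if random.random() < rate else g for g in individual)

random.seed(1)
c1, c2 = crossover("11111111", "00000000")
print(c1, c2)                       # complementary blocks of 1s and 0s
print(mutate(c1, rate=0.5))         # some genes of c1 randomly flipped
```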

Genetic algorithms are easy to apply to a wide range of problems. The results can be very
good on some problems, and rather poor on others.
The Algorithm
Step 1. Determine the number of chromosomes, the number of generations, and the
mutation rate and crossover rate values
Step 2. Generate the chromosomes of the population and initialize their genes with
random values
Step 3. Process steps 4-7 until the number of generations is met
Step 4. Evaluation of fitness value of chromosomes by calculating objective function
Step 5. Chromosomes selection
Step 6. Crossover
Step 7. Mutation
Step 8. New Chromosomes (Offspring)
Step 9. Solution (Best Chromosomes)

Numerical Example:
Using genetic algorithms to solve a combinatorial problem:
Suppose there is the equality a + 2b + 3c + 4d = 30; a genetic algorithm will be used to
find values of a, b, c, and d that satisfy this equation. First we should formulate the
objective function; for this problem the objective is minimizing the value of the function f,
where f = Abs((a + 2b + 3c + 4d) - 30). Since there are four variables in the equation, namely
a, b, c, and d, we can compose the chromosome as the tuple (a, b, c, d).

To speed up the computation, we can restrict the values of the variables a, b, c, and
d to integers between 0 and 30. For example, a random chromosome with
a + 2b + 3c + 4d = 76 has objective value f = Abs(76 - 30) = 46, and a fitness inversely
related to f.
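The example above can be run end to end with a compact genetic algorithm. The population size, rates, elitism, and the fitness transform 1/(1 + f) below are illustrative choices, not values prescribed by the text:

```python
import random

# Evolve integer chromosomes (a, b, c, d) in [0, 30] to minimize
# f = |(a + 2b + 3c + 4d) - 30|.

def objective(ch):
    a, b, c, d = ch
    return abs(a + 2 * b + 3 * c + 4 * d - 30)

def evolve(pop_size=50, generations=300, mutation_rate=0.1, seed=42):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 30) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        best = min(pop, key=objective)
        if objective(best) == 0:
            return best                                   # exact solution found
        fitness = [1.0 / (1.0 + objective(ch)) for ch in pop]
        new_pop = [best]                                  # elitism
        while len(new_pop) < pop_size:
            p1, p2 = rng.choices(pop, weights=fitness, k=2)   # selection
            point = rng.randint(1, 3)                         # crossover
            child = p1[:point] + p2[point:]
            child = [rng.randint(0, 30) if rng.random() < mutation_rate else g
                     for g in child]                          # mutation
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=objective)

a, b, c, d = evolve()
print((a, b, c, d), a + 2 * b + 3 * c + 4 * d)   # ideally sums to 30
```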
OPTIMIZATION OF FUZZY SYSTEMS:

Fuzzy models proved to be a powerful tool for system representation for two main
reasons - they are universal approximators and they provide linguistic interpretation of the
data in the form of IF ... THEN rules. An important issue that relates to a wide range of
practical applications is the problem of optimization of systems that are described by fuzzy
models. Keeping in mind that the fuzzy model essentially is a nonlinear function, the
problem of optimization of fuzzy systems, i.e. systems that are represented by fuzzy
models, naturally transfers to the classical problem of minimization / maximization of
nonlinear functions.
In many real-world problems, the design data, objective function, and constraints
are stated in vague and linguistic terms. For example, the statement, “This beam carries a
load of 1000 lb with a probability of 0.8” is imprecise because of randomness in the
material properties of the beam. On the other hand, the statement, “This beam carries a
large load” is imprecise because of the fuzzy meaning of “large load.” Similarly, in the
optimum design of a machine component, the induced stress (σ) is constrained by an upper
bound value (σmax) as σ ≤ σmax. If σmax = 30,000 psi, it implies that a design with σ = 30,000
psi is acceptable whereas a design with σ = 30,001 psi is not acceptable. However,
there is no substantive difference between designs with σ = 30,000 psi and σ = 30,001 psi.
It appears that it is more reasonable to have a transition stage from absolute permission to
absolute impermission. This implies that the constraint is to be stated in fuzzy terms. Fuzzy
theories can be used to model and design systems involving vague and imprecise
information.

Example Problem:

A farming company wanted to decide on the size and number of pumps required for
lift irrigation. Four differently sized pumps (x1 through x4) were considered. The objective
was to minimize cost and the constraints were to supply water to all fields (which have a
strong seasonally fluctuating demand). That meant certain quantities had to be supplied
(quantity constraint) and a minimum number of fields per day had to be supplied (routing
constraint). For other reasons, it was required that at least 6 of the smallest pumps should
be included. The management wanted to use quantitative analysis and agreed to the
following suggested linear programming approach. The available budget is Rs. 42 lakhs.
The optimization problem is Minimize
41,400 x1 + 44,300 x2 + 48,100 x3 + 49,100 x4
Subject to
0.84 x1 + 1.44 x2 + 2.16 x3 + 2.4 x4 ≥ 170
16x1 + 16 x2 + 16 x3 + 16 x4 ≥ 1300
x1 ≥ 6
x2, x3, x4 ≥ 0

The solution of this problem using classical LP is


Min Cost = Rs. 38,64,975
x1 = 6, x2 = 16.29, x3 = 0, x4 = 58.96
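As a quick sanity check, the quoted solution can be substituted back into the objective and the constraints as printed above:

```python
# Feasibility and cost check of the classical LP solution
# x1 = 6, x2 = 16.29, x3 = 0, x4 = 58.96.

x1, x2, x3, x4 = 6.0, 16.29, 0.0, 58.96
cost = 41400 * x1 + 44300 * x2 + 48100 * x3 + 49100 * x4
quantity = 0.84 * x1 + 1.44 * x2 + 2.16 * x3 + 2.4 * x4
routing = 16 * (x1 + x2 + x3 + x4)

print(round(cost))        # close to the quoted Rs. 38,64,975 (x values are rounded)
print(quantity, routing)  # approximately 170 and 1300: both constraints are tight
```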

Fuzzy LP
As the demand forecasts had been used to formulate the constraints, there was a
danger of not being able to meet higher demands. It is safe to stay below the available
budget of Rs. 42 lakhs. Therefore, bounds and spread of the tolerance interval are fixed as
follows:
Bounds: d1 = 37,00,000; d2 = 170; d3 = 1,300; d4 = 6
Spreads: p1=5,00,000; p2=10; p3=100; p4=6

The objective function in the classical LP problem is transformed into a constraint:

41,400 x1 + 44,300 x2 + 48,100 x3 + 49,100 x4 ≤ 42,00,000

The optimization problem constraints are


Maximize λ subject to
0.083 x1 + 0.089 x2 + 0.096 x3 + 0.098 x4 + λ ≤ 8.4
0.084 x1 + 0.144 x2 + 0.216 x3 + 0.240 x4 - λ ≥ 17
0.16 x1 + 0.16 x2 + 0.16 x3 + 0.16 x4 - λ ≥ 13
0.167 x1 - λ ≥ 1
λ, x2, x3, x4 ≥ 0
The solutions obtained using classical and fuzzy LP are given below:

Through fuzzy LP, a "leeway" has been provided with respect to all constraints at an
additional cost of 3.2%. The decision maker is not forced into a precise formulation
because of mathematical reasons even though he/she might only be able or willing to
describe his/her problem in fuzzy terms.
