Analytical Multidimensional (Multivariable) Unconstrained Optimization
Learning Objectives
After studying this chapter, you should be able to:
1. Describe the method of solving problems of multivariable unconstrained optimization.
2. Classify the multivariable optimization problems.
3. Explain the necessary and sufficient conditions for solving multivariable unconstrained optimization problems.
4. Solve unconstrained multivariable functions.
In Chapter 5, we formulated the single variable optimization problems without constraints. Now let us extend those concepts to solve multivariable optimization problems without constraints. The search in such problems is routed in more than one direction. For instance, if there are two variables in the objective function, we call such problems two-dimensional, as we have to search in two directions. Similarly, a three-variable problem can be called three-dimensional, and so on. However, the method of solving is similar in all such cases and, therefore, all these problems can be named multidimensional or multivariable unconstrained optimization problems. Recollect that we made use of complete differentiation to solve single variable unconstrained optimization problems. Here we use the concept of partial differentiation in solving multivariable unconstrained optimization problems using analytical methods.
6.1 Classification of Multivariable Optimization Problems

The multivariable optimization problems can be classified broadly into two categories: multivariable unconstrained optimization, i.e. without constraints, and multivariable constrained optimization, i.e. with constraints. Further, since we categorize the constraints into two types, namely equality and inequality type, we can further classify the multivariable constrained optimization problems into two types. Summarily, the classification of multivariable optimization problems is shown in Figure 6.1.
[Figure: a classification tree. Multivariable optimization problems divide into multivariable unconstrained optimization (i.e. without constraints) and multivariable constrained optimization (i.e. with constraints); the latter divides into equality constraints and inequality constraints.]

Figure 6.1 Classification of multivariable optimization problems.
In this chapter, we discuss the analytical solution methods for multivariable unconstrained
optimization.
6.2 Optimization Techniques to Solve Unconstrained Multivariable
Functions
Let us now discuss the necessary and sufficient conditions for optimization of an unconstrained
(without constraints) multivariable function, for which we make use of Taylor's series expansion
of a multivariable function. Taylor’s theorem/series expansion is given in Exhibit 6.1.
Exhibit 6.1
Taylor's Theorem/Series Expansion

Taylor's Series

In the Taylor's formula with remainder (Eqs. (i) and (ii)), if the remainder $R_n(x) \to 0$ as $n \to \infty$, then we obtain

$$f(x) = f(a) + \frac{x-a}{1!}f'(a) + \frac{(x-a)^2}{2!}f''(a) + \cdots + \frac{(x-a)^n}{n!}f^{(n)}(a) + \cdots \quad \text{(i)}$$

which is called the Taylor's series. When $a = 0$, we obtain the Maclaurin's series

$$f(x) = f(0) + \frac{x}{1!}f'(0) + \frac{x^2}{2!}f''(0) + \cdots \quad \text{(ii)}$$

It is assumed that $f(x)$ has continuous derivatives up to the $(n+1)$th order, so that $f^{(n+1)}(x)$ exists in the interval $(a, x)$. Hence, to establish that $\lim_{n\to\infty} |R_n(x)| = 0$, it is sufficient to show that

$$\lim_{n\to\infty} \frac{(x-a)^{n+1}}{(n+1)!} = 0 \quad \text{for any fixed numbers } x \text{ and } a.$$
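To make the convergence claim in Exhibit 6.1 concrete, here is a minimal numeric sketch in Python; the test function $e^x$, the point $x = 1.5$, and the use of Python itself are illustrative choices of ours, not from the text. The partial sums of the Maclaurin series approach the true value as the remainder $|R_n(x)|$ shrinks.

```python
# Minimal sketch: Maclaurin partial sums of exp(x) converge because the
# remainder R_n(x) -> 0 as n -> infinity (Exhibit 6.1). The test function
# exp(x) and the point x = 1.5 are illustrative choices, not from the text.
import math

x = 1.5
true_value = math.exp(x)

partial_sum = 0.0
for n in range(11):
    partial_sum += x**n / math.factorial(n)    # add the nth Maclaurin term
    remainder = abs(true_value - partial_sum)  # |R_n(x)|
    print(f"n = {n:2d}  sum = {partial_sum:.8f}  |R_n| = {remainder:.2e}")
```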
6.3 Necessary Condition (Theorem 6.1)

If $f(x)$ has an extreme point (maximum or minimum) at $x = x^*$ and if the first partial derivatives of $f(x)$ exist at $x^*$, then

$$\frac{\partial f(x^*)}{\partial x_1} = \frac{\partial f(x^*)}{\partial x_2} = \cdots = \frac{\partial f(x^*)}{\partial x_n} = 0$$

6.4 Sufficient Condition (Theorem 6.2)

A sufficient condition for a stationary point $x^*$ to be an extreme point is that the matrix of second partial derivatives (Hessian matrix) of $f(x)$ evaluated at $x^*$ is:
(i) Positive definite when $x^*$ is a relative minimum point.
(ii) Negative definite when $x^*$ is a relative maximum point.

6.5 Working Rule (Algorithm) for Unconstrained Multivariable Optimization
From Theorems 6.1 and 6.2, we can obtain the following procedure to solve the problems of
multivariable functions without constraints.
Step 1: Check the function to ensure that it belongs to the category of multivariable unconstrained optimization problems, i.e. it should contain more than one variable, say $x_1, x_2, \ldots, x_n$, and no constraints.
Step 2: Find the first partial derivatives of the function w.r.t. $x_1, x_2, \ldots, x_n$.

Step 3: Equate all the first partial derivatives (obtained from Step 2) to zero to find the values $x_1^*, x_2^*, \ldots, x_n^*$.

Step 4: Find all the second partial derivatives of the function.

Step 5: Prepare the Hessian matrix. For example, if we have two variables $x_1, x_2$ in $f(x)$, then the Hessian matrix is

$$J = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} \end{bmatrix}$$

Step 6: Find the values of the determinants of the square sub-matrices $J_1, J_2, \ldots$. In our example,

$$J_1 = \left| \frac{\partial^2 f}{\partial x_1^2} \right|, \qquad J_2 = \begin{vmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} \end{vmatrix}$$

Step 7: Evaluate whether $x_1^*, x_2^*$ are relative maxima or minima depending on the positive or negative nature of $J_1$, $J_2$ and so on.
Thus, the sign of $J_1$ decides whether $J$ is positive or negative, while the sign of $J_2$ decides the definiteness or indefiniteness of $J$ (Table 6.1). More clearly:

(i) If $J_1 > 0$, $J_2 > 0$, then $J$ is positive definite and $x^*$ is a relative minimum.
(ii) If $J_1 > 0$, $J_2 < 0$, then $J$ is indefinite and $x^*$ is a saddle point.
(iii) If $J_1 < 0$, $J_2 > 0$, then $J$ is negative definite and $x^*$ is a relative maximum.
(iv) If $J_1 < 0$, $J_2 < 0$, then $J$ is indefinite and $x^*$ is a saddle point.
Table 6.1 presents the sign notation of J.

Table 6.1 Sign Notation of J

                             J1 positive (J1 > 0)           J1 negative (J1 < 0)
J2 positive (J2 > 0),        J is positive definite and     J is negative definite and
(J is definite)              x* is a relative minimum       x* is a relative maximum
J2 negative (J2 < 0),        J is indefinite and x* is      J is indefinite and x* is
(J is indefinite)            a saddle point                 a saddle point
Note: If in a function of two variables $f(x_1, x_2)$ the Hessian matrix is neither positive nor negative definite, then the point $(x_1^*, x_2^*)$ at which $\partial f/\partial x_1 = \partial f/\partial x_2 = 0$ is called the saddle point, which means it may be a maximum or a minimum with respect to one variable when the other is fixed.
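The working rule of Section 6.5 can be mechanized symbolically. Below is a minimal sketch using the sympy library (our tooling choice, not the book's); the function f in it is a hypothetical quadratic chosen only to exercise Steps 2 through 7.

```python
# Sketch of the Section 6.5 working rule with sympy. The function f below
# is a hypothetical example chosen for illustration, not from the text.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 + x2**2 - 2*x1 - 4*x2 + 5           # hypothetical test function

grad = [sp.diff(f, v) for v in (x1, x2)]      # Step 2: first partials
points = sp.solve(grad, (x1, x2), dict=True)  # Step 3: equate them to zero
H = sp.hessian(f, (x1, x2))                   # Steps 4-5: Hessian matrix J

for pt in points:
    J1 = H[0, 0].subs(pt)                     # Step 6: sub-determinants
    J2 = H.subs(pt).det()
    if J1 > 0 and J2 > 0:                     # Step 7: classify the point
        nature = 'relative minimum (J positive definite)'
    elif J1 < 0 and J2 > 0:
        nature = 'relative maximum (J negative definite)'
    else:
        nature = 'saddle point (J indefinite)'
    print(pt, '-> J1 =', J1, ', J2 =', J2, ':', nature)
```

For this test function the sketch prints the single stationary point {x1: 1, x2: 2} with J1 = 2 and J2 = 4, a relative minimum, matching rule (i) above.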
Illustration 6.2 Determine the extreme points of the following function and evaluate the function at those points:

$$f(x) = x_1^3 + x_2^3 + 2x_1^2 + 4x_2^2 + 6$$

Solution The necessary conditions for the existence of an extreme point are

$$\frac{\partial f}{\partial x_1} = 0 \quad \text{and} \quad \frac{\partial f}{\partial x_2} = 0$$

$$\frac{\partial f}{\partial x_1} = 3x_1^2 + 4x_1 = x_1(3x_1 + 4) = 0 \quad \text{(i)}$$

$$\frac{\partial f}{\partial x_2} = 3x_2^2 + 8x_2 = x_2(3x_2 + 8) = 0 \quad \text{(ii)}$$

From Eq. (i), we get $x_1 = 0$ or $-4/3$, and from Eq. (ii), we get $x_2 = 0$ or $-8/3$.

Hence, the above equations are satisfied by the following points:

$(0, 0)$, $(0, -8/3)$, $(-4/3, 0)$ and $(-4/3, -8/3)$
Now, let us find the nature of these extreme points, for which we use the sufficiency conditions, i.e. the second order partial derivatives of f:

$$\frac{\partial^2 f}{\partial x_1^2} = 6x_1 + 4, \qquad \frac{\partial^2 f}{\partial x_2^2} = 6x_2 + 8, \qquad \frac{\partial^2 f}{\partial x_1 \partial x_2} = 0$$

The Hessian matrix of f is given by

$$J = \begin{bmatrix} 6x_1 + 4 & 0 \\ 0 & 6x_2 + 8 \end{bmatrix}$$

Hence,

$$J_1 = |6x_1 + 4| \quad \text{and} \quad J_2 = \begin{vmatrix} 6x_1 + 4 & 0 \\ 0 & 6x_2 + 8 \end{vmatrix} = (6x_1 + 4)(6x_2 + 8)$$
The values of $J_1$ and $J_2$ and the nature of the extreme points are given in Table 6.2.

Table 6.2 Sign Notation of J in Illustration 6.2

Point           Value of J1   Value of J2   Nature of J         Nature of x*        f(x*)
(0, 0)          +4            +32           Positive definite   Relative minimum    6
(0, -8/3)       +4            -32           Indefinite          Saddle point        418/27
(-4/3, 0)       -4            -32           Indefinite          Saddle point        194/27
(-4/3, -8/3)    -4            +32           Negative definite   Relative maximum    50/3
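As a cross-check, the entries of Table 6.2 can be reproduced with a few lines of sympy; this is a verification sketch of ours, not part of the original solution.

```python
# Verification sketch for Illustration 6.2 / Table 6.2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**3 + x2**3 + 2*x1**2 + 4*x2**2 + 6

points = sp.solve([sp.diff(f, x1), sp.diff(f, x2)], (x1, x2), dict=True)
H = sp.hessian(f, (x1, x2))

for pt in points:
    J1 = H[0, 0].subs(pt)      # 6*x1 + 4 evaluated at the point
    J2 = H.subs(pt).det()      # (6*x1 + 4)(6*x2 + 8); off-diagonals are zero
    print(pt, 'J1 =', J1, 'J2 =', J2, 'f =', f.subs(pt))
# Expected: (0,0) gives J1=4, J2=32, f=6 (relative minimum);
# (-4/3,-8/3) gives J1=-4, J2=32, f=50/3 (relative maximum); others saddle.
```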
$$\frac{\partial^2 f}{\partial x_1^2} = 2, \qquad \frac{\partial^2 f}{\partial x_2^2} = -2, \qquad \frac{\partial^2 f}{\partial x_1 \partial x_2} = 0$$

Therefore, at $x_1 = 0$, $x_2 = 0$ the Hessian matrix is

$$J = \begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}$$

We have $J_1 = |2| = 2$ and $J_2 = \begin{vmatrix} 2 & 0 \\ 0 & -2 \end{vmatrix} = -4$.

Since $J_1$ is positive (+2) and $J_2$ is negative (−4), the nature of J is indefinite and the point $(x_1^*, x_2^*) = (0, 0)$ is a saddle point; the value of the function at this saddle point is $f(0, 0) = 0$.

Note: At the saddle point, if one of the values of $x_1$ or $x_2$ is fixed, the other may show extremity.
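The Hessian diag(2, −2) above is exactly what one obtains for $f = x_1^2 - x_2^2$, the canonical saddle. The original function of this illustration is not shown in the surviving text, so the sketch below assumes this stand-in to demonstrate the Note above: fixing one variable at the saddle point yields an extremum in the other.

```python
# The fragment's Hessian diag(2, -2) matches f = x1**2 - x2**2 (an assumed
# stand-in; the illustration's original function is not shown). Fixing one
# variable at the saddle point yields an extremum in the other, per the Note.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 - x2**2                       # assumed function, H = diag(2, -2)

print(sp.hessian(f, (x1, x2)))          # Matrix([[2, 0], [0, -2]])
print(sp.diff(f.subs(x2, 0), x1, 2))    # +2: minimum along the x1 axis
print(sp.diff(f.subs(x1, 0), x2, 2))    # -2: maximum along the x2 axis
```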
Illustration 6.5 Two frictionless rigid bodies $R_1$ and $R_2$ are connected by three linear elastic springs of constants $k_1$, $k_2$ and $k_3$ as shown in Figure 6.3. The springs are at their natural positions when the applied force P is zero. Find the displacements $x_1$ and $x_2$ under a force of 26 units by using the principle of minimum potential energy, assuming the spring constants $k_1$, $k_2$ and $k_3$ as 2, 3 and 4 respectively. Examine whether the displacements correspond to the minimum potential energy.
Solution Solving the necessary conditions gives the displacements

$$x_1^* = 4 \quad \text{and} \quad x_2^* = 7$$

$$J_2 = 42 + 16 = 58 > 0$$

Since the values of $J_1$ and $J_2$ are positive, the matrix J is positive definite and hence the values $x_1^*$ and $x_2^*$ correspond to the minimum potential energy.
Illustration 6.6 The profit on a contract of road laying is estimated as $26x_1 + 20x_2 + 4x_1x_2 - 3x_1^2 - 4x_2^2$, where $x_1$ and $x_2$ denote the labour cost and the material cost respectively. Find the values of labour cost and material cost that can maximize the profit on the contract.

Solution Given, profit $Z(x_1, x_2) = 26x_1 + 20x_2 + 4x_1x_2 - 3x_1^2 - 4x_2^2$

Applying the necessary conditions, i.e.

$$\frac{\partial Z}{\partial x_1} = 0 \quad \text{and} \quad \frac{\partial Z}{\partial x_2} = 0$$

$$\frac{\partial Z}{\partial x_1} = 26 + 4x_2 - 6x_1 = 0 \quad \text{(i)}$$

$$\frac{\partial Z}{\partial x_2} = 20 + 4x_1 - 8x_2 = 0 \quad \text{(ii)}$$

On solving Eqs. (i) and (ii), we get $x_1^* = 9$, $x_2^* = 7$.
Now, let us apply the sufficiency condition to confirm that Z is maximized at $(x_1^*, x_2^*)$. This is verified by testing the negative definiteness of the Hessian matrix. The second partial derivatives of Z are

$$\frac{\partial^2 Z}{\partial x_1^2} = -6, \qquad \frac{\partial^2 Z}{\partial x_2^2} = -8, \qquad \frac{\partial^2 Z}{\partial x_1 \partial x_2} = 4$$

Hence, the Hessian matrix is

$$J = \begin{bmatrix} -6 & 4 \\ 4 & -8 \end{bmatrix}$$

The determinants of the square sub-matrices of J are

$$J_1 = |-6| = -6 < 0$$

$$J_2 = \begin{vmatrix} -6 & 4 \\ 4 & -8 \end{vmatrix} = 48 - 16 = 32 > 0$$

Since $J_1$ is negative (i.e. −6) and $J_2$ is positive (i.e. +32), the nature of J is negative definite. Hence, the values $x_1^*$ and $x_2^*$ give a relative maximum of Z.
Hence, maximum profit, $Z_{max} = 26(9) + 20(7) + 4(9)(7) - 3(9)^2 - 4(7)^2$
$= 234 + 140 + 252 - 243 - 196 = 187$ units
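A short sympy check (our verification, not the book's) confirms the stationary point, the negative definiteness of the Hessian, and the maximum profit of 187 units:

```python
# Verification sketch for Illustration 6.6.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
Z = 26*x1 + 20*x2 + 4*x1*x2 - 3*x1**2 - 4*x2**2

sol = sp.solve([sp.diff(Z, x1), sp.diff(Z, x2)], (x1, x2))  # {x1: 9, x2: 7}
H = sp.hessian(Z, (x1, x2))                                 # [[-6, 4], [4, -8]]

print('J1 =', H[0, 0], ', J2 =', H.det())  # -6 < 0 and 32 > 0: negative definite
print('Z_max =', Z.subs(sol))              # 187
```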
Summary
Multivariable optimization problems without constraints are explained in this chapter. The optimization problems of multivariable functions without constraints make use of Taylor's series expansion of multivariable functions and partial differentiation. The basic idea behind the solution methods of these problems is to reduce them, as nearly as possible, to single variable problems through partial differentiation, which is differentiation with respect to one variable at a time keeping the other variables fixed. However, the mathematical concepts of matrices, calculus and other simple algebraic principles are prerequisites to these problems. We will take up the multivariable constrained problems in the next chapter.
Key Concepts

Multidimensional optimization: The optimization problems routed in more than one direction, i.e. having more than one variable.

Saddle point: If there is a function of two variables $f(x_1, x_2)$ whose Hessian matrix is neither positive definite nor negative definite, then the point $(x_1^*, x_2^*)$ at which $\partial f/\partial x_1 = \partial f/\partial x_2 = 0$ is called the saddle point.

Hessian matrix: $J = [\partial^2 f / \partial x_i \partial x_j]$ is the matrix of the second partial derivatives.

Positive definite: If $J_1 > 0$, $J_2 > 0$, then J is positive definite and $x^*$ is a relative minimum.

Positive indefinite: If $J_1 > 0$, $J_2 < 0$, then J is indefinite and $x^*$ is a saddle point.

Negative definite: If $J_1 < 0$, $J_2 > 0$, then J is negative definite and $x^*$ is a relative maximum.

Negative indefinite: If $J_1 < 0$, $J_2 < 0$, then J is indefinite and $x^*$ is a saddle point.