Constrained Optimization
3.1. Introduction
“The fundamental economic hypothesis is that human beings behave as rational and self-
interested agents in the pursuit of their objectives, and that resources are limited. Economic
behavior is therefore modeled as the solution of a constrained optimization problem”.
Example 1 (substitution method): maximize u = x1·x2 subject to a budget constraint that can be solved for x2 as x2 = 30 − ¼x1 (i.e. x1 + 4x2 = 120). Substituting the constraint into the objective function gives
u = x1(30 − ¼x1) = 30x1 − ¼x1²
FOC (first-order condition): du/dx1 = 30 − ½x1 = 0
⇒ x1 = 60 and x2 = 15
SOC (second-order/sufficient condition): d²u/dx1² = −½ < 0, so utility is maximized at these values.
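A quick symbolic cross-check of this result (a minimal sympy sketch; it assumes, as above, that the objective is u = x1·x2 with the constraint already solved as x2 = 30 − ¼x1):

```python
# Verifying Example 1 (substitution method) with sympy.
# Assumes u = x1*x2 and the constraint x2 = 30 - x1/4, as substituted above.
import sympy as sp

x1 = sp.symbols('x1')
u = x1 * (30 - x1 / 4)           # objective after substituting the constraint

foc = sp.diff(u, x1)             # first-order condition du/dx1
x1_star = sp.solve(sp.Eq(foc, 0), x1)[0]
x2_star = 30 - x1_star / 4
soc = sp.diff(u, x1, 2)          # second-order condition

print(x1_star, x2_star, soc)     # 60, 15, -1/2  -> maximum
```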
Example 2: A firm faces the production function Q = 12K^0.4 L^0.4. Assume it can purchase K and L at prices of 40 birr and 5 birr per unit respectively and that it has a budget of 800 birr. Determine the amounts of K and L which maximize output.
Solution
The problem is Maximize Q = 12K^0.4 L^0.4
Subject to 40K + 5L = 800
According to the theory of production, the optimality condition is written in such a way that the ratio of the marginal product of every input to its price must be the same. That is
MPK / PK = MPL / PL
The marginal products can be obtained using the method of partial differentiation as
follows.
MPK = 4.8K^−0.6 L^0.4 ………………………………………………… (1)
MPL = 4.8K^0.4 L^−0.6 ………………………………………………… (2)
Substituting these marginal products and the given prices into this optimality condition gives us
4.8K^−0.6 L^0.4 / 40 = 4.8K^0.4 L^−0.6 / 5
K^−0.6 L^0.4 = 8K^0.4 L^−0.6
Multiplying both sides by K^0.6 L^0.6 results in
L = 8K ………………………………………………… (3)
Substituting (3) in the budget constraint we get
40K + 5(8K) =800
40K + 40K = 800
80K = 800
K = 10
Thus, L= 8(10) =80
Therefore, this firm should employ 10 units of capital and 80 units of labor in the production process to maximize its output.
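A numerical cross-check of this example (a sketch using scipy's SLSQP solver rather than the analytical condition above):

```python
# Numerical check of Example 2: maximize Q = 12*K**0.4 * L**0.4
# subject to the budget 40K + 5L = 800.
import numpy as np
from scipy.optimize import minimize

def neg_output(v):
    K, L = v
    return -12 * K**0.4 * L**0.4          # minimize the negative of output

budget = {'type': 'eq', 'fun': lambda v: 800 - 40 * v[0] - 5 * v[1]}

res = minimize(neg_output, x0=[5.0, 5.0], constraints=[budget],
               bounds=[(1e-6, None), (1e-6, None)], method='SLSQP')

K_star, L_star = res.x
print(round(K_star, 2), round(L_star, 2))   # approximately 10 and 80
```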
2. The Lagrange Multiplier Method
When the constraint is a complicated function, or when there are several constraints under consideration, the substitution method can become very cumbersome. This has led to the development of another, simpler method of finding the extrema of a function: the Lagrangian method. This method involves forming a Lagrangian function that includes the objective function, the constraint function and a variable, λ, called the Lagrange multiplier. The essence of this method is to convert a constrained extremum problem into a form to which the first-order conditions of unconstrained optimization can still be applied.
We may note here that the necessary condition obtained above under the substitution method can also be obtained from an auxiliary function, termed the Lagrange function. This function is formed from the objective function and the constraints. Given the function Z = f(x, y) subject to g(x, y) = Px·x + Py·y = M, to determine the amounts of x and y which maximize the objective function using the Lagrange multiplier method we follow the steps below.
Step 1 Rewrite the constraint function in its implicit form as
𝑀 − 𝑥𝑃𝑥 − 𝑦𝑃𝑦 = 0
Step 2 Add the above constraint to the objective function and thereby formulate the Lagrange function, a modified form of the objective function which includes the constraint, as follows:
𝐿(𝑥, 𝑦, 𝜆) = 𝑍(𝑥, 𝑦) + 𝜆(𝑀 − 𝑥𝑃𝑥 − 𝑦𝑃𝑦 )
The necessary condition, i.e. the first-order condition, for maximization is that the first-order partial derivatives of the Lagrange function be equal to zero. Differentiating L with respect to x, y and λ and equating each derivative to zero gives us
∂L/∂x = ∂Z/∂x − λPx = 0 ………………………………………………… (4)
∂L/∂y = ∂Z/∂y − λPy = 0 ………………………………………………… (5)
∂L/∂λ = M − xPx − yPy = 0 ………………………………………………… (6)
From equations (4) and (5) we get
λ = Zx/Px and λ = Zy/Py
This means λ = Zx/Px = Zy/Py, or Zx/Zy = Px/Py.
The Lagrange multiplier (λ) measures the effect of a one-unit change in the constant of the constraint on the optimal value of the objective function. That is, the Lagrange multiplier approximates the marginal impact on the objective function caused by a small change in the constant of the constraint.
Sufficient condition: To get the second-order condition, we partially differentiate equations (4), (5) and (6). Representing the second direct partial derivatives by Zxx and Zyy, and the second cross partial derivatives by Zxy and Zyx, the bordered Hessian determinant, bordered with 0, gx and gy, is

|H̄| = | 0    gx   gy  |   | 0    −Px  −Py |
      | gx   Lxx  Lxy | = | −Px  Zxx  Zxy | > 0
      | gy   Lyx  Lyy |   | −Py  Zyx  Zyy |
If all the bordered principal minors are negative, i.e., if |H̄₂| < 0, |H̄₃| < 0, |H̄₄| < 0, …, the bordered Hessian is positive definite, and a positive definite bordered Hessian always satisfies the sufficient condition for a relative minimum.
If the bordered principal minors alternate consistently in sign, starting positive, i.e., if |H̄₂| > 0, |H̄₃| < 0, |H̄₄| > 0, etc., the bordered Hessian is negative definite, and a negative definite bordered Hessian always meets the sufficient condition for a relative maximum.
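To make these steps concrete, here is a minimal sympy sketch for a hypothetical problem (maximize f = x·y subject to x + y = c; the function and values are illustrative only, not from the text). It solves the first-order conditions, checks the sign of the bordered Hessian, and confirms that λ approximates the change in the optimal value when the constant of the constraint rises by one unit.

```python
# Hypothetical illustration: maximize f = x*y subject to x + y = c.
import sympy as sp

x, y, lam, c = sp.symbols('x y lam c', positive=True)
L = x * y + lam * (c - x - y)

sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]

# Bordered Hessian, bordered by the constraint partials g_x = g_y = 1.
H_bar = sp.Matrix([[0, 1, 1],
                   [1, sp.diff(L, x, x), sp.diff(L, x, y)],
                   [1, sp.diff(L, y, x), sp.diff(L, y, y)]])

f_opt = (x * y).subs(sol)                     # optimal value as a function of c
print(sol[lam].subs(c, 10))                   # lambda* = 5 at c = 10
print(H_bar.det())                            # 2 > 0 -> relative maximum
print(f_opt.subs(c, 11) - f_opt.subs(c, 10))  # 21/4 = 5.25, close to lambda* = 5
```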
i) Maximization
Example: The utility function of a consumer who consumes two goods x and y is
U(x, y) = (x + 2)(y + 1)
If the price of good x is Px = 4 birr, that of good y is Py = 6 birr and the consumer has a fixed budget of 130 birr, determine the optimal values of x and y using the Lagrange multiplier method.
Solution: Maximize U (x, y) = x y + x+ 2y + 2
Subject to 4x + 6y = 130
Now we should formulate the Lagrange function to solve this problem. That is
𝐿(𝑥, 𝑦, 𝜆) = x y + x+ 2y + 2 + 𝜆 (130 - 4x - 6y) ………………………………………… (7)
Necessary conditions for utility maximization are ∂L/∂x = 0, ∂L/∂y = 0 and ∂L/∂λ = 0:
∂L/∂x = (y + 1) − 4λ = 0
y = −1 + 4λ ………………………………………………… (8)
∂L/∂y = (x + 2) − 6λ = 0
x = −2 + 6λ ………………………………………………… (9)
∂L/∂λ = 130 − 4x − 6y = 0
4x + 6y = 130 ………………………………………………… (10)
Substituting the values of x and y given in equations (8) and (9) into (10) enables us to determine λ:
4(−2 + 6λ) + 6(−1 + 4λ) = 130
−8 + 24λ − 6 + 24λ = 130
48λ = 144
λ = 3
Therefore, x = −2 + 6(3) = 16 and y = −1 + 4(3) = 11.
The second-order sufficient condition for utility maximization is

|H̄| = | 0   gx   gy  |
      | gx  Lxx  Lxy |
      | gy  Lyx  Lyy |
The second partial derivatives of the Lagrange function and the first partial derivatives of the constraint function are
Lxx = ∂²L/∂x² = 0, Lyy = ∂²L/∂y² = 0, Lxy = ∂²L/∂x∂y = 1, Lyx = ∂²L/∂y∂x = 1
gx = ∂g/∂x = 4 and gy = ∂g/∂y = 6
Therefore, the bordered Hessian determinant of this function is

|H̄| = | 0  4  6 |
      | 4  0  1 | = −4(0 − 6) + 6(4 − 0) = 48 > 0
      | 6  1  0 |
The second-order condition for a maximum is satisfied, since |H̄| > 0 (the bordered Hessian is negative definite). Thus, the consumer maximizes utility by consuming 16 units of good x and 11 units of good y. The maximum utility is U = (16 + 2)(11 + 1) = (18)(12) = 216 units, which equals the value of the Lagrange function at these values of x, y and λ. The value of the Lagrange multiplier is λ = 3. It indicates that a one-unit increase (decrease) in the consumer's budget increases (decreases) total utility by approximately 3 units.
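A symbolic cross-check of this example with sympy (a sketch reproducing the first-order conditions and the bordered Hessian):

```python
# Checking the utility-maximization example: U = (x+2)*(y+1), budget 4x + 6y = 130.
import sympy as sp

x, y, lam = sp.symbols('x y lam')
U = (x + 2) * (y + 1)
L = U + lam * (130 - 4 * x - 6 * y)

sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
print(sol)                                   # {x: 16, y: 11, lam: 3}

# Bordered Hessian of the Lagrangian, bordered by g_x = 4 and g_y = 6.
H_bar = sp.Matrix([[0, 4, 6],
                   [4, sp.diff(L, x, x), sp.diff(L, x, y)],
                   [6, sp.diff(L, y, x), sp.diff(L, y, y)]])
print(H_bar.det())                           # 48 > 0 -> maximum
print(U.subs(sol))                           # 216, the maximum utility
```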
Example 2: Suppose a monopolist sells two products x and y whose respective inverse demand functions are Px = 100 − 2x and Py = 80 − y. The total cost function is TC = 20x + 20y, and the maximum joint output of the two products is 60 units. Determine the profit-maximizing level of each output and the respective prices.
Solution: As it is known profit (𝜋) = TR - TC, where TR represents total revenue and TC
represents total cost.
TR = x·Px + y·Py = (100x − 2x²) + (80y − y²)
Thus π = 100x − 2x² + 80y − y² − 20x − 20y
π = 80x + 60y − 2x² − y²
The monopolist maximizes this profit subject to the production quota. Thus,
Maximize π = 80x + 60y − 2x² − y²
Subject to x + y = 60
To solve this problem, we formulate the Lagrange function
L(x, y, λ) = 80x + 60y − 2x² − y² + λ(x + y − 60) ………………………………………………… (11)
First-order conditions for maximum profit are
Lx = 80 − 4x + λ = 0
−4x = −80 − λ
x = 20 + ¼λ ………………………………………………… (12)
Ly = 60 − 2y + λ = 0
−2y = −60 − λ
y = 30 + ½λ ………………………………………………… (13)
Lλ = x + y − 60 = 0
x + y = 60 ………………………………………………… (14)
Substituting equations (12) and (13) into equation (14), we get
20 + ¼λ + 30 + ½λ = 60
50 + ¾λ = 60
¾λ = 10
λ = 40/3
Thus, x = 20 + ¼(40/3) = 70/3 ≈ 23.33 and y = 30 + ½(40/3) = 110/3 ≈ 36.67.
Therefore, the monopolist maximizes its profit by selling 23.33 units of good x at a price of 53.33 birr per unit and 36.67 units of good y at a price of 43.33 birr per unit. λ = 40/3 measures the marginal effect of the output quota on the optimal value of the objective function: if the constant of the constraint changes by one unit, that is x + y = 61, the maximum profit changes by approximately the value of the Lagrange multiplier, 40/3.
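A symbolic cross-check of the monopolist example (a sketch with sympy):

```python
# Checking the monopolist example: maximize 80x + 60y - 2x**2 - y**2
# subject to x + y = 60, using the same Lagrangian as above.
import sympy as sp

x, y, lam = sp.symbols('x y lam')
profit = 80 * x + 60 * y - 2 * x**2 - y**2
L = profit + lam * (x + y - 60)

sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
print(sol)                             # {x: 70/3, y: 110/3, lam: 40/3}
print((100 - 2 * x).subs(sol))         # price of x: 160/3 ~ 53.33
print((80 - y).subs(sol))              # price of y: 130/3 ~ 43.33
```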
ii) Minimization
Example 3: A firm wants to determine the least-cost combination of inputs for the production of a given level of output Q. The production function is Q = f(L, K) and the cost function of the firm is C = L·PL + K·PK, where L = labor, K = capital and Q = output. Assuming the prices of both inputs are exogenous, we can formulate the cost-minimization problem as
Minimize C = PL·L + PK·K
Subject to Q = f(L, K)
To determine the amounts of labor and capital that should be employed, we first formulate the Lagrange function:
L = L·PL + K·PK + λ(Q − f(L, K)) ………………………………………………… (15)
First-order conditions for a minimum cost are
LL = PL − λQL = 0
λ = PL/QL = PL/MPL ………………………………………………… (16)
LK = PK − λQK = 0
λ = PK/QK = PK/MPK ………………………………………………… (17)
Equations (16) and (17) indicate that, at the optimal input combination, the ratio of input price to marginal product must be the same for each input. This ratio shows the amount of expenditure per unit of the marginal product of the input under consideration. Thus, the Lagrange multiplier is interpreted as the marginal cost of production at the optimum. In other words, it indicates the effect of a change in output on the total cost of production, i.e., it measures the comparative-static effect of the constraint constant on the optimal value of the objective function.
The first-order conditions in equations (16) and (17) can also be analyzed in terms of isoquants and isocosts as
PL/PK = MPL/MPK ………………………………………………… (18)
The ratio MPL/MPK represents the negative of the slope of the isoquant, which measures the marginal rate of technical substitution of labor for capital (MRTS_LK).
The ratio PL/PK is the negative of the slope of the isocost line. An isocost is a line indicating the locus of input combinations that entail the same total cost. It is given by the equation
C = PL·L + PK·K, or K = C/PK − (PL/PK)·L
The condition PL/PK = MPL/MPK thus indicates that the isocost line and the isoquant are tangent to each other at the point of optimal input combination.
Second-order condition for minimization of cost: a negative bordered Hessian determinant is sufficient for the cost to be at its minimum value. That is,

|H̄| = | 0   QL   QK  |
      | QL  LLL  LLK | < 0
      | QK  LKL  LKK |
Example 4: Suppose a firm produces output Q using labor L and capital K with the production function Q = 10K^0.5 L^0.5. Output is restricted to 200 units, the price of labor is 10 birr per unit and the price of capital is 40 birr per unit. Determine the amounts of L and K that produce this output at minimum cost, and find the minimum cost.
The problem is Minimize C = 10L + 40K
Subject to 200 = 10K^0.5 L^0.5
Formulating the Lagrange function:
L(L, K, λ) = 10L + 40K + λ(200 − 10K^0.5 L^0.5) ………………………………………………… (21)
First-order conditions:
LL = 10 − 5λK^0.5 L^−0.5 = 0
λ = 2L^0.5 / K^0.5 ………………………………………………… (22)
LK = 40 − 5λK^−0.5 L^0.5 = 0
λ = 8K^0.5 / L^0.5 ………………………………………………… (23)
Lλ = 200 − 10K^0.5 L^0.5 = 0
K^0.5 L^0.5 = 20 ………………………………………………… (24)
Equating (22) and (23), 2L^0.5/K^0.5 = 8K^0.5/L^0.5, so that
2L = 8K
L = 4K ………………………………………………… (25)
Substituting equation (25) into (24) gives us
K^0.5 (4K)^0.5 = 20 ………………………………………………… (26)
2K = 20
K = 10, L = 4(10) = 40 and λ = 2(40)^0.5/(10)^0.5 = 4
Second-order condition
Now we check the second-order condition to verify that the cost of production is least at K = 10 and L = 40. For cost minimization the determinant of the bordered Hessian matrix must be less than zero:

|H̄| = | 0   QL   QK  |
      | QL  LLL  LLK | < 0
      | QK  LKL  LKK |

At L = 40 and K = 10,
QL = ∂Q/∂L = 5√(K/L) = 5√(10/40) = 2.5
QK = ∂Q/∂K = 5√(L/K) = 5√(40/10) = 10
and the second partials of the Lagrange function are LLL = 2.5λK^0.5 L^−1.5 = 0.125, LKK = 2.5λL^0.5 K^−1.5 = 2 and LLK = LKL = −2.5λK^−0.5 L^−0.5 = −0.5, so that

|H̄| = | 0     2.5    10   |
      | 2.5   0.125  −0.5 | = −50 < 0
      | 10    −0.5   2    |

Thus cost is minimized at K = 10 and L = 40, and the minimum cost is C = 10(40) + 40(10) = 800 birr.
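A numerical cross-check of Example 4 (a sketch with scipy's SLSQP solver):

```python
# Numerical check of Example 4: minimize C = 10L + 40K subject to
# the output quota 10*K**0.5*L**0.5 = 200.
import numpy as np
from scipy.optimize import minimize

cost = lambda v: 10 * v[0] + 40 * v[1]                          # v = (L, K)
quota = {'type': 'eq', 'fun': lambda v: 10 * v[1]**0.5 * v[0]**0.5 - 200}

res = minimize(cost, x0=[20.0, 20.0], constraints=[quota],
               bounds=[(1e-6, None), (1e-6, None)], method='SLSQP')

print(np.round(res.x, 2))     # L ~ 40, K ~ 10
print(round(res.fun, 2))      # minimum cost ~ 800 birr
```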
With n choice variables and a single constraint, the second-order condition is checked using the bordered principal minors

|H̄₂| = | 0   g1   g2  |          | 0   g1   g2   g3  |
        | g1  L11  L12 | , |H̄₃| = | g1  L11  L12  L13 | , etc.
        | g2  L21  L22 |          | g2  L21  L22  L23 |
                                  | g3  L31  L32  L33 |

When there are m constraints, the Lagrange function contains m + n variables (the n choice variables and the m multipliers), and the first-order conditions form m + n simultaneous equations.
The first-order conditions include, for each constraint,
Lλi = ci − g^i(x1, x2, x3, …, xn) = 0, i = 1, 2, …, m
together with Lxj = 0 for each choice variable xj.
Second-order conditions for the optimization of a three-variable, two-constraint problem are based on

|H̄| = | 0    0    g1¹  g2¹  g3¹ |
      | 0    0    g1²  g2²  g3² |
      | g1¹  g1²  L11  L12  L13 |
      | g2¹  g2²  L21  L22  L23 |
      | g3¹  g3²  L31  L32  L33 |

where gi^j denotes ∂g^j/∂xi.
In this case the only relevant bordered principal minor is |H̄₃| = |H̄|. Thus, for a maximum value |H̄₃| < 0, and for a minimum |H̄₃| > 0. With n variables and m constraints, the second-order condition is stated in terms of
|H̄| = | 0    0    …   0    |  g1¹  g2¹  …  gn¹ |
      | 0    0    …   0    |  g1²  g2²  …  gn² |
      | …    …    …   …    |  …    …    …  …   |
      | 0    0    …   0    |  g1ᵐ  g2ᵐ  …  gnᵐ |
      | ------------------ | ----------------- |
      | g1¹  g1²  …   g1ᵐ  |  L11  L12  …  L1n |
      | g2¹  g2²  …   g2ᵐ  |  L21  L22  …  L2n |
      | …    …    …   …    |  …    …    …  …   |
      | gn¹  gn²  …   gnᵐ  |  Ln1  Ln2  …  Lnn |
We have thus divided the bordered Hessian determinant into four parts. The upper-left area contains only zeros and the lower-right area is simply the plain Hessian. The remaining two areas contain the gi^j derivatives, which are mirror images of each other about the principal diagonal of the bordered Hessian.
We can create several bordered principal minors from |H̄|. The second-order sufficient condition for optimization is checked using the signs of the bordered principal minors
|H̄m+1|, |H̄m+2|, …, |H̄n|
The objective function attains a maximum when these successive bordered principal minors alternate in sign, the sign of |H̄m+1| being that of (−1)^(m+1). For a minimum, the sufficient condition is that all the bordered principal minors have the same sign, namely that of (−1)^m. This means that, for a minimum, all the bordered principal minors are negative when the number of constraints is odd and positive when the number of constraints is even.
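These sign checks can also be carried out numerically. The helper below is a sketch (not from the text): it assembles the bordered Hessian from the m × n Jacobian of the constraints and the n × n Hessian of the Lagrangian, and returns the bordered principal minors |H̄m+1|, …, |H̄n| whose signs are examined above.

```python
import numpy as np

def bordered_principal_minors(G, H):
    """G: m x n constraint Jacobian; H: n x n Hessian of the Lagrangian."""
    m, n = G.shape
    minors = []
    for k in range(m + 1, n + 1):
        Gk = G[:, :k]
        top = np.hstack([np.zeros((m, m)), Gk])
        bottom = np.hstack([Gk.T, H[:k, :k]])
        minors.append(np.linalg.det(np.vstack([top, bottom])))
    return minors

# Example: the utility problem above (n = 2, m = 1), g = 4x + 6y,
# with L_xx = L_yy = 0 and L_xy = 1.
G = np.array([[4.0, 6.0]])
H = np.array([[0.0, 1.0], [1.0, 0.0]])
print(bordered_principal_minors(G, H))   # [~48.0] > 0 -> maximum
```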
Diagram (ii) shows the case where the local maximum is located on the vertical axis, indicated by point C. At this point the choice variable is 0 and the first-order derivative is zero, i.e. dπ/dx = 0; at point C we have a boundary solution.
Diagram (iii) indicates that the local maximum may be located at point D or point E within the feasible region. In this case the maximum point is characterized by the inequality dπ/dx < 0, because the curves are on their decreasing portion at these points.
From the above discussion it is clear that the value of the choice variable which gives a local maximum of the objective function must satisfy one of the following three conditions.
𝑓 ′ (𝑥) = 0, and x > 0 (point B)
𝑓 ′ (𝑥) = 0, and x = 0 (point C)
𝑓 ′ (𝑥) < 0, and x = 0 (point D and E)
Combining these three conditions into one statement gives us
f′(x) ≤ 0, x ≥ 0 and x·f′(x) = 0
The first inequality conveys the information concerning dπ/dx. The second inequality shows the non-negativity restriction of the problem. The third part involves the product of the two quantities x and f′(x). Taken together, these three conditions constitute the first-order necessary condition for the objective function to achieve a local maximum when the choice variable is required to be non-negative.
If the problem involves n - choice variables like
Maximize 𝜋 = 𝑓(𝑥1 , 𝑥2 , 𝑥3 , . . . 𝑥𝑛 )
Subject to 𝑥𝑖 ≥ 0
In the classical optimization process, the first-order condition is
f1 = f2 = f3 = … = fn = 0
The first order condition that should be satisfied to determine the value of the choice variable
which maximizes the objective function is
𝑓𝑖 ≤ 0 𝑥𝑖 ≥0 and 𝑥𝑖 𝑓𝑖 = 0 (i =1, 2, 3, -------, n)
where fi is the partial derivative of the objective function with respect to xi, i.e., fi = ∂π/∂xi.
Step 2
Now we continue to the second step and incorporate inequality constraints into the problem. To simplify the analysis, let us first discuss a maximization problem with three choice variables and two constraints, as shown below.
Maximize 𝜋 = 𝑓(𝑥1 , 𝑥2 , 𝑥3 )
Subject to 𝑔1 (𝑥1 , 𝑥2 , 𝑥3 ) ≤ 𝑘1
𝑔2 (𝑥1 , 𝑥2 , 𝑥3 ) ≤ k2
And x1, x2, x3 ≥ 0
Using the dummy variables s1 and s2 we can change the above problem in to
Maximize 𝜋 = 𝑓(𝑥1 , 𝑥2 , 𝑥3 )
Subject to 𝑔1 (𝑥1 , 𝑥2 , 𝑥3 ) + 𝑠1 = 𝑘1
𝑔2 (𝑥1 , 𝑥2 , 𝑥3 ) + 𝑠2 = 𝑘2
x1, x2, x3 ≥ 0 and s1, s2 ≥ 0
Ignoring for the moment the non-negativity restrictions on the choice variables, we can formulate the Lagrange function using the classical method as
L = f(x1, x2, x3) + λ1[k1 − g¹(x1, x2, x3) − s1] + λ2[k2 − g²(x1, x2, x3) − s2]
It is possible to derive the Kuhn–Tucker conditions directly from the Lagrange function. Considering the above three-variable, two-constraint problem, the first-order condition is
∂L/∂x1 = ∂L/∂x2 = ∂L/∂x3 = ∂L/∂s1 = ∂L/∂s2 = ∂L/∂λ1 = ∂L/∂λ2 = 0
However, the xj and si variables are restricted to be non-negative. As a result, the first-order conditions on these variables have to be modified as follows:
∂L/∂xj ≤ 0, xj ≥ 0 and xj·∂L/∂xj = 0
∂L/∂si ≤ 0, si ≥ 0 and si·∂L/∂si = 0
∂L/∂λi = 0, where i = 1, 2 and j = 1, 2, 3.
However, we can combine the last two lines and thereby eliminate the dummy variables from the above first-order conditions, as shown below. Since ∂L/∂si = −λi, the second line states that
−λi ≤ 0, si ≥ 0 and −si·λi = 0
or λi ≥ 0, si ≥ 0 and si·λi = 0
But we know that si = ki − g^i(x1, x2, x3). Substituting this in place of si, we get
∂L/∂λi = ki − g^i(x1, x2, x3) ≥ 0, λi ≥ 0 and λi[ki − g^i(x1, x2, x3)] = 0
These are the Kuhn–Tucker conditions for the given maximization problem.
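As an illustration of these conditions, the sketch below (a hypothetical helper, not from the text) checks the maximization Kuhn–Tucker conditions numerically at a candidate point; it is applied to the values found in Example 6 later in this section.

```python
# Hypothetical numerical check of the maximization Kuhn-Tucker conditions.
import numpy as np

def check_kuhn_tucker_max(grad_f, constraints, x, lam, tol=1e-8):
    """constraints: list of (g_i, grad_g_i, k_i) for g_i(x) <= k_i."""
    # marginal conditions: dL/dx_j <= 0, x_j >= 0, x_j * dL/dx_j = 0
    dL_dx = grad_f(x) - sum(l * np.asarray(gg(x))
                            for (_, gg, _), l in zip(constraints, lam))
    ok = (np.all(dL_dx <= tol) and np.all(x >= -tol)
          and np.all(np.abs(x * dL_dx) <= tol))
    # constraint conditions: k_i - g_i(x) >= 0, lam_i >= 0, lam_i*(k_i - g_i(x)) = 0
    for (g, _, k), l in zip(constraints, lam):
        slack = k - g(x)
        ok = ok and slack >= -tol and l >= -tol and abs(l * slack) <= tol
    return ok

# Values from Example 6: f = 10x - x**2 + 180y - y**2, constraint x + y <= 80.
grad_f = lambda v: np.array([10 - 2 * v[0], 180 - 2 * v[1]])
cons = [(lambda v: v[0] + v[1], lambda v: np.array([1.0, 1.0]), 80.0)]
print(check_kuhn_tucker_max(grad_f, cons, np.array([0.0, 80.0]), [20.0]))  # True
```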
How can we solve a minimization problem?
One method is to change it into a maximization problem and then apply the same procedure as above: minimizing C is equivalent to maximizing (−C). Keep in mind, however, that each constraint inequality must then be multiplied by (−1). Alternatively, instead of converting the inequality constraints into equality constraints using dummy variables, we can apply the Lagrange multiplier method directly and state the minimization version of the Kuhn–Tucker conditions as
∂L/∂xj ≥ 0, xj ≥ 0 and xj·∂L/∂xj = 0
∂L/∂λi ≤ 0, λi ≥ 0 and λi·∂L/∂λi = 0   (minimization)
Example 5: Minimize C = x² + y²
Subject to xy ≥ 25
x, y ≥ 0
The Lagrange function for this problem is
L = x² + y² + λ(25 − xy)
It is a minimization problem. Therefore, the appropriate conditions are
∂L/∂x = 2x − λy ≥ 0, x ≥ 0 and x·∂L/∂x = 0
∂L/∂y = 2y − λx ≥ 0, y ≥ 0 and y·∂L/∂y = 0
∂L/∂λ = 25 − xy ≤ 0, λ ≥ 0 and λ·∂L/∂λ = 0
Can we determine a non-negative value of λ which, together with the optimal values of x and y, satisfies all of the above conditions? The optimal solution of this problem is x = 5 and y = 5, both nonzero. Thus, the complementary slackness conditions (x·∂L/∂x = 0 and y·∂L/∂y = 0) imply that ∂L/∂x = 0 and ∂L/∂y = 0.
Thus, we can determine the value of λ by substituting the optimal values of the choice variables into either of these marginal conditions:
∂L/∂x = 2x − λy = 0
2(5) − λ(5) = 0
10 − 5λ = 0
λ = 2 > 0
The values λ = 2, x = 5 and y = 5 imply that ∂L/∂x = 0, ∂L/∂y = 0 and ∂L/∂λ = 0, which fulfils the marginal conditions and the complementary slackness conditions. In other words, all the Kuhn–Tucker conditions are satisfied.
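A numerical cross-check of Example 5 (a sketch using scipy's SLSQP solver; the multiplier is recovered from the stationarity condition λ = 2x/y rather than read from the solver):

```python
# Checking Example 5: minimize x**2 + y**2 subject to x*y >= 25, x, y >= 0.
from scipy.optimize import minimize

obj = lambda v: v[0]**2 + v[1]**2
con = {'type': 'ineq', 'fun': lambda v: v[0] * v[1] - 25}   # x*y - 25 >= 0

res = minimize(obj, x0=[10.0, 10.0], constraints=[con],
               bounds=[(0, None), (0, None)], method='SLSQP')

print(res.x)                         # approximately [5, 5]
print(res.fun)                       # approximately 50
print(2 * res.x[0] / res.x[1])       # approximately 2, the multiplier lambda
```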
Example 6: Maximize Z = 10x − x² + 180y − y²
Subject to x + y ≤ 80 and x, y ≥ 0
Solution
First, we should formulate the Lagrange function assuming the equality constraint and
ignoring the non-negativity constraints.
L = 10x − x² + 180y − y² + λ(80 − x − y)
The first-order conditions are
∂L/∂x = 10 − 2x − λ = 0 ⇒ λ = 10 − 2x -------------------------------- (1)
∂L/∂y = 180 − 2y − λ = 0 ⇒ λ = 180 − 2y ------------------------------ (2)
∂L/∂λ = 80 − x − y = 0 ⇒ x + y = 80 ---------------------------------- (3)
Equating equations (1) and (2):
10 − 2𝑥 = 180 − 2𝑦
2𝑦 − 2𝑥 = 170
2𝑦 = 170 + 2𝑥
𝑦 = 85 + 𝑥 − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − − (4)
Substituting equation (4) into (3), we get
𝑥 + 85 + 𝑥 = 80
2𝑥 = −5 ⇒ 𝑥 = −2.5
However, the values of the choice variables are restricted to be non-negative, so x* = −2.5 is infeasible and we must set x = 0. Now we can determine the value of y by substituting zero in place of x in equation (3):
0 + 𝑦 = 80
𝑦 ∗ = 80
Therefore, 𝜆∗ = 180 − 2(80) = 20
The possible solutions are 𝑥 ∗ = 0, 𝑦 ∗ = 80, 𝜆∗ = 20
However, we must check the inequality constraints and the complementary slackness
conditions to decide whether these values are solutions or not.
1) Inequality constraints
i) The non-negativity restrictions are satisfied since 𝑥 = 0, 𝑦 = 80, 𝜆 = 20 ≥ 0
ii) The inequality constraint is satisfied since x + y = 0 + 80 ≤ 80.
2) Complementary slackness conditions
i) x·∂L/∂x = 0: since x = 0, this condition holds automatically; for a maximization problem we only need ∂L/∂x ≤ 0, which is satisfied:
∂L/∂x = 10 − 2(0) − 20 = −10 < 0
ii) y·∂L/∂y = 0: since y = 80 ≠ 0, we need ∂L/∂y = 0:
∂L/∂y = 180 − 2(80) − 20 = 0
iii) λ·∂L/∂λ = 0: since λ = 20 ≠ 0, we need ∂L/∂λ = 0:
∂L/∂λ = 80 − 0 − 80 = 0
All the Kuhn–Tucker conditions are satisfied. Thus, the objective function is maximized when x* = 0, y* = 80 and λ* = 20.
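A numerical cross-check of Example 6 (a sketch with scipy; the maximization is run as the minimization of −Z):

```python
# Checking the Kuhn-Tucker solution of Example 6: maximize
# Z = 10x - x**2 + 180y - y**2 subject to x + y <= 80, x, y >= 0.
from scipy.optimize import minimize

neg_Z = lambda v: -(10 * v[0] - v[0]**2 + 180 * v[1] - v[1]**2)
con = {'type': 'ineq', 'fun': lambda v: 80 - v[0] - v[1]}     # x + y <= 80

res = minimize(neg_Z, x0=[10.0, 10.0], constraints=[con],
               bounds=[(0, None), (0, None)], method='SLSQP')
print(res.x)      # approximately [0, 80]
print(-res.fun)   # approximately 8000, the maximum of Z
```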
Example 7: The revenue and cost functions of a firm are R = 32x − x² and C = x² + 8x + 4, where x is output, and the minimum acceptable profit is π₀ = 18. Determine the amount of output which maximizes revenue subject to this minimum profit. In this case the revenue function is concave and the cost function is convex.
The problem is
Maximize R = 32x − x²
Subject to x² + 8x + 4 − 32x + x² ≤ −18
and x ≥ 0
Under these conditions the Kuhn–Tucker conditions are both necessary and sufficient, since the objective function is concave and the constraint function is convex.
The Lagrange function of this problem is
L = 32x − x² + λ(−22 − 2x² + 24x) -------------------------------- (1)
Thus,
∂L/∂x = 32 − 2x − 4λx + 24λ = 0 -------------------------------- (2)
∂L/∂λ = −22 − 2x² + 24x = 0 ------------------------------------ (3)
Solving equation (3), −2x² + 24x − 22 = 0, i.e. x² − 12x + 11 = (x − 1)(x − 11) = 0, gives the candidate values x = 1 and x = 11 -------------------------------- (4)
However, we must check the inequality constraints and the complementary slackness conditions to decide whether these values are solutions or not:
∂L/∂x ≤ 0, x ≥ 0 and x·∂L/∂x = 0 -------------------------------- (5)
∂L/∂λ ≥ 0, λ ≥ 0 and λ·∂L/∂λ = 0 -------------------------------- (6)
At x = 1: since x > 0, complementary slackness implies ∂L/∂x = 0. Thus ∂L/∂x = 32 − 2 − 4λ + 24λ = 30 + 20λ = 0 ⇒ λ = −3/2, which does not satisfy condition (6).
At x = 11: since x > 0, complementary slackness again implies ∂L/∂x = 0. Thus ∂L/∂x = 32 − 22 − 44λ + 24λ = 10 − 20λ = 0 ⇒ λ = 1/2, which satisfies both conditions (5) and (6). This means the Kuhn–Tucker conditions are fulfilled at x = 11. Therefore, revenue is maximized when the firm sells x = 11 units of output.
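A numerical cross-check of Example 7 (a sketch with scipy; the profit floor is written as an inequality constraint in the ≥ 0 form scipy expects):

```python
# Checking Example 7: maximize R = 32x - x**2 subject to the profit floor
# R - C >= 18, where C = x**2 + 8x + 4, and x >= 0.
from scipy.optimize import minimize

neg_R = lambda v: -(32 * v[0] - v[0]**2)
# profit constraint: (32x - x**2) - (x**2 + 8x + 4) - 18 >= 0
con = {'type': 'ineq',
       'fun': lambda v: (32 * v[0] - v[0]**2) - (v[0]**2 + 8 * v[0] + 4) - 18}

res = minimize(neg_R, x0=[5.0], constraints=[con],
               bounds=[(0, None)], method='SLSQP')
print(res.x)        # approximately [11]
print(-res.fun)     # approximately 231, the maximum revenue
```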