
Chapter Three

Constrained Optimization
3.1. Introduction
“The fundamental economic hypothesis is that human beings behave as rational and self-interested agents in the pursuit of their objectives, and that resources are limited. Economic behavior is therefore modeled as the solution of a constrained optimization problem.”

– Bill Sharkey, 1995


In economics, optimization is a general heading covering both minimization and maximization problems. Optimization means "the quest for the best": the maximization of any form of benefit, or the minimization of any form of cost. It is the maximization or minimization of an objective function, with or without constraints. Problems that involve no constraints are referred to as unconstrained, and the process of solving them is called unconstrained or free optimization.
In economic optimization problems, the variables involved are often required to satisfy certain constraints. In unconstrained optimization problems, no restrictions are placed on the values of the choice variables. In reality, however, optimization of an economic function must respect resource requirements or availability; this follows from the scarcity of resources. For example, maximization of production is subject to the availability of inputs, and minimization of cost must still deliver a given level of output. Another common constraint in economics is the non-negativity restriction: although negative values are sometimes admissible, most economic functions are meaningful only in the first quadrant. These constraints should therefore be built into the optimization problem.
Constrained optimization deals with optimizing the objective function (the function to be optimized) subject to constraints (restrictions). It is a means of studying the rational behavior of economic agents facing resource constraints. When the objective and constraint functions are linear, the problem is solved with linear programming. When the functions are non-linear, we use the concept of derivatives. This chapter focuses on the optimization of non-linear constrained functions.

3.2. Two variables problems with equality constraint


In the case of two choice variables, an optimization problem with an equality constraint takes the form

Max/Min  y = f(x1, x2),   subject to  g(x1, x2) = c.

This type of optimization problem is common in economics because, for the purpose of simplification, two-variable cases are assumed when finding optimum values. For example, in maximization of utility using the indifference-curve approach, the consumer is assumed to consume bundles of two goods.
Example: Max 𝑢(𝑥1 , 𝑥2 ), 𝑠𝑢𝑏𝑗𝑒𝑐𝑡 𝑡𝑜 𝑝1 𝑥1 + 𝑝2 𝑥2 = 𝑀
In this section, we will see two methods of solving two variable optimization problems with
equality constraints.
i) Direct substitution method:
One method of constrained optimization is to substitute the equality constraint into the objective function. This converts a constrained optimization problem into an unconstrained one by internalizing the constraint (or constraints) directly in the objective function.
The constraint is internalized by expressing it as a function of one of the arguments of the objective function and then substituting for that argument. We can then solve the internalized objective function using unconstrained optimization techniques. This technique of solving constrained optimization problems is called the substitution method.
Consider the consumer problem in the above example. From the budget constraint,

P2 x2 = M − P1 x1
x2 = M/P2 − (P1/P2) x1

Now x2 is expressed as a function of x1. Substituting this expression eliminates x2 from the problem:

⇒ Max u = u(x1, x2(x1))

du/dx1 = ∂u/∂x1 + (∂u/∂x2)(∂x2/∂x1) = 0
       = MU1 + MU2 (−P1/P2) = 0
⇒ MU1 = MU2 (P1/P2)
⇒ MU1/MU2 = P1/P2

Example 2: u = x1 x2, subject to x1 + 4x2 = 120

4x2 = 120 − x1
x2 = 30 − (1/4)x1
⇒ u = x1(30 − (1/4)x1) = 30x1 − (1/4)x1²

FOC (first-order condition): du/dx1 = 30 − (1/2)x1 = 0
⇒ x1 = 60 and x2 = 15

SOC (second-order/sufficient condition): d²u/dx1² = −1/2 < 0, so utility is maximized at the equilibrium point.
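The substitution method lends itself to a quick numerical check. The sketch below (Python, illustrative only, not part of the original text) internalizes the constraint x1 + 4x2 = 120 and maximizes the resulting one-variable function by grid search, recovering x1 = 60 and x2 = 15:

```python
# Substitution method for Max u = x1*x2 subject to x1 + 4*x2 = 120.
# The constraint is internalized as x2 = 30 - x1/4, leaving a
# one-variable unconstrained problem solved here by grid search.

def u(x1):
    x2 = 30 - x1 / 4          # constraint solved for x2
    return x1 * x2

# Search x1 over [0, 120] in steps of 0.01.
best_x1 = max((i / 100 for i in range(0, 12001)), key=u)
best_x2 = 30 - best_x1 / 4
print(best_x1, best_x2, u(best_x1))   # 60.0 15.0 900.0
```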
Example 3: A firm faces the production function Q = 12K^0.4 L^0.4. Assume it can purchase K and L at prices of 40 birr and 5 birr per unit respectively, and that it has a budget of 800 birr. Determine the amounts of K and L which maximize output.
Solution
The problem is: Maximize Q = 12K^0.4 L^0.4
Subject to 40K + 5L = 800
According to the theory of production, the optimization condition is written in such a way that the ratio of the marginal product of every input to its price must be the same. That is,

MPK/PK = MPL/PL

The marginal products can be obtained using the method of partial differentiation as
follows.
MPK = 4.8K^−0.6 L^0.4 ………………………… (1)
MPL = 4.8K^0.4 L^−0.6 ………………………… (2)
Substituting these marginal products and the given prices into the optimality condition gives us

4.8K^−0.6 L^0.4 / 40 = 4.8K^0.4 L^−0.6 / 5
K^−0.6 L^0.4 = 8K^0.4 L^−0.6

Multiplying both sides by K^0.6 L^0.6 results in
L = 8K ………………………… (3)
Substituting (3) in the budget constraint we get
40K + 5(8K) =800
40K+ 40K = 800
80k =800
K=10
Thus, L= 8(10) =80
Therefore, this firm should employ 10 units of capital and 80 units of labor in the production
process to optimize its output.
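The same substitution logic can be verified numerically. In this sketch (Python, illustrative), the budget 40K + 5L = 800 is solved for L and output is maximized over K by grid search, reproducing K = 10 and L = 80:

```python
# Substitution check for Max Q = 12*K**0.4 * L**0.4 subject to 40K + 5L = 800.

def output(K):
    L = 160 - 8 * K                     # budget constraint solved for L
    if K <= 0 or L <= 0:
        return 0.0                      # outside the economically meaningful region
    return 12 * K**0.4 * L**0.4

# Search K over (0, 20) in steps of 0.01.
best_K = max((i / 100 for i in range(1, 2000)), key=output)
best_L = 160 - 8 * best_K
print(best_K, best_L)                   # 10.0 80.0
```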
2. The Lagrange Multiplier Method
When the constraint is a complicated function, or when there are several constraints under consideration, the substitution method can become very cumbersome. This has led to the development of another, simpler method of finding the extrema of a function: the Lagrangian method. This method involves forming a Lagrangian function that includes the objective function, the constraint function and a variable λ called the Lagrange multiplier. The essence of the method is to convert a constrained extremum problem into a form to which the first-order conditions of unconstrained optimization can still be applied.
Note that the necessary condition obtained above under the substitution method can also be derived from an auxiliary function, termed the Lagrange function, which is formed from the objective function and the constraints. Given the function Z = f(x, y) subject to g(x, y) = Px x + Py y = M, to determine the amounts of x and y which maximize the objective function using the Lagrange multiplier method, we follow these steps.
Step 1: Rewrite the constraint function in its implicit form as
M − x Px − y Py = 0

Step 2: Multiply the constraint function by the Lagrange multiplier λ:
λ(M − x Px − y Py)

Step 3: Add the above term to the objective function, thereby forming the Lagrange function, a modified objective function which incorporates the constraint:
L(x, y, λ) = Z(x, y) + λ(M − x Px − y Py)

The necessary (first-order) condition for maximization is that the first-order partial derivatives of the Lagrange function equal zero. Differentiating L with respect to x, y and λ and setting each derivative to zero gives

∂L/∂x = ∂Z/∂x − λPx = 0 ………………………… (4)
∂L/∂y = ∂Z/∂y − λPy = 0 ………………………… (5)
∂L/∂λ = M − x Px − y Py = 0 ………………………… (6)

From equations (4) and (5) we get
λ = Zx/Px and λ = Zy/Py

This means λ = Zx/Px = Zy/Py, or Zx/Zy = Px/Py.

The Lagrange multiplier (λ) measures the effect of a unit change in the constant of the constraint on the optimal value of the objective function; that is, it approximates the marginal impact on the objective function of a small change in the constraint constant.
Sufficient condition: To obtain the second-order condition, we partially differentiate equations (4), (5) and (6). Writing the second direct partial derivatives as Zxx and Zyy and the second cross partials as Zxy and Zyx, the Hessian determinant bordered with 0, gx and gy is

       | 0    gx   gy  |   | 0    −Px  −Py |
|H̄| = | gx   Lxx  Lxy | = | −Px  Zxx  Zxy |
       | gy   Lyx  Lyy |   | −Py  Zyx  Zyy |

 If all the bordered principal minors are negative, i.e., if |H̄|2 < 0, |H̄|3 < 0, |H̄|4 < 0, …, the bordered Hessian is positive definite, and a positive definite d²L satisfies the sufficient condition for a relative minimum.
 If the bordered principal minors alternate consistently in sign starting from positive, i.e., if |H̄|2 > 0, |H̄|3 < 0, |H̄|4 > 0, etc., the bordered Hessian is negative definite, and a negative definite d²L satisfies the sufficient condition for a relative maximum.
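For the two-variable case, only the full 3×3 bordered determinant needs checking, and the sign test is easy to mechanize. A minimal sketch (Python; the numbers gx = 4, gy = 6, Lxy = 1 passed below are hypothetical illustration values):

```python
# Sign test for a 3x3 bordered Hessian
#   | 0   gx  gy  |
#   | gx  Lxx Lxy |
#   | gy  Lyx Lyy |
# determinant > 0 -> negative definite d2L -> relative maximum;
# determinant < 0 -> positive definite d2L -> relative minimum.

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

def classify(gx, gy, Lxx, Lxy, Lyy):
    H = [[0, gx, gy], [gx, Lxx, Lxy], [gy, Lxy, Lyy]]
    d = det3(H)
    return "maximum" if d > 0 else ("minimum" if d < 0 else "inconclusive")

print(classify(4, 6, 0, 1, 0))   # maximum (determinant = 48 > 0)
```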
i) Maximization
Example: The utility function of a consumer who consumes two goods x and y is
U(x, y) = (x + 2)(y + 1)
The price of good x is Px = 4 birr, that of good y is Py = 6 birr, and the consumer has a fixed budget of 130 birr. Determine the optimum values of x and y using the Lagrange multiplier method.
Solution: Maximize U(x, y) = xy + x + 2y + 2
Subject to 4x + 6y = 130
Now we formulate the Lagrange function to solve this problem:
L(x, y, λ) = xy + x + 2y + 2 + λ(130 − 4x − 6y) ………………………… (7)
Necessary conditions for utility maximization are ∂L/∂x = 0, ∂L/∂y = 0, ∂L/∂λ = 0:

∂L/∂x = (y + 1) − 4λ = 0
y = −1 + 4λ ………………………… (8)
∂L/∂y = (x + 2) − 6λ = 0
x = −2 + 6λ ………………………… (9)
∂L/∂λ = 4x + 6y − 130 = 0
4x + 6y = 130 ………………………… (10)
Substituting the values of x and y from equations (8) and (9) into (10) enables us to determine λ:
4(−2 + 6λ) + 6(−1 + 4λ) = 130
−8 + 24λ − 6 + 24λ = 130
48λ = 144
λ = 3
Therefore, x = −2 + 6(3) = 16 and y = −1 + 4(3) = 11.
The second-order sufficient condition for utility maximization involves

       | 0   gx   gy  |
|H̄| = | gx  Lxx  Lxy |
       | gy  Lyx  Lyy |

The second partial derivatives of the Lagrange function and the first partial derivatives of the constraint function are
Lxx = ∂²L/∂x² = 0, Lyy = ∂²L/∂y² = 0, Lxy = Lyx = ∂²L/∂x∂y = 1
gx = ∂g/∂x = 4 and gy = ∂g/∂y = 6
Therefore, the bordered Hessian determinant of this function is

       | 0  4  6 |
|H̄| = | 4  0  1 | = −4(0 − 6) + 6(4 − 0) = 48 > 0
       | 6  1  0 |
The second-order condition |H̄|2 > 0 (negative definiteness) is satisfied for maximization. Thus, the consumer maximizes utility by consuming 16 units of good x and 11 units of good y. The maximum utility is U = (16 + 2)(11 + 1) = (18)(12) = 216 units, which equals the value of the Lagrange function at these values of x, y and λ. The value of the Lagrange multiplier is λ = 3: a one-unit increase (decrease) in the consumer's budget increases (decreases) total utility by about 3 units.
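The first-order system above is linear in x, y and λ, so it can be solved in a few lines. A small verification sketch (Python, not part of the original text):

```python
# FOCs of L = (x+2)(y+1) + lam*(130 - 4x - 6y):
#   y + 1 - 4*lam = 0   ->  y = 4*lam - 1
#   x + 2 - 6*lam = 0   ->  x = 6*lam - 2
#   4x + 6y = 130
# Substituting the first two into the budget line gives 48*lam - 14 = 130.
lam = 144 / 48
x = 6 * lam - 2
y = 4 * lam - 1
U = (x + 2) * (y + 1)
print(lam, x, y, U)   # 3.0 16.0 11.0 216.0
```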
Example 2: Suppose a monopolist sells two products x and y with respective demands Px = 100 − 2x and Py = 80 − y. The total cost function is TC = 20x + 20y, and the joint output of the two products is restricted to a maximum of 60 units. Determine the profit-maximizing level of each output and the respective prices.
Solution: As is known, profit (π) = TR − TC, where TR represents total revenue and TC represents total cost.
TR = x Px + y Py = (100x − 2x²) + (80y − y²)
Thus π = 100x − 2x² + 80y − y² − 20x − 20y = 80x + 60y − 2x² − y²
This monopolist maximizes its profit subject to the production quota. Thus,
Maximize π = 80x + 60y − 2x² − y²
Subject to x + y = 60
To solve this problem, we formulate the Lagrange function
L(x, y, λ) = 80x + 60y − 2x² − y² + λ(x + y − 60) ………………………… (11)
First-order conditions for maximum profit are
Lx = 80 − 4x + λ = 0
x = 20 + (1/4)λ ………………………… (12)
Ly = 60 − 2y + λ = 0
y = 30 + (1/2)λ ………………………… (13)
Lλ = x + y − 60 = 0
x + y = 60 ………………………… (14)
Substituting equations (12) and (13) into equation (14), we get
20 + (1/4)λ + 30 + (1/2)λ = 60
50 + (3/4)λ = 60
(3/4)λ = 10
λ = 40/3
Thus, x = 20 + (1/4)(40/3) = 20 + 3.33 = 23.33 and y = 30 + (1/2)(40/3) = 30 + 6.67 = 36.67.
The second-order condition for maximum profit:
Lxx = −4, Lyy = −2, Lxy = Lyx = 0
gx = 1 and gy = 1
Therefore, the bordered Hessian determinant of the given function is

       | 0  1   1  |
|H̄| = | 1  −4  0  | = −1(−2 − 0) + 1(0 + 4) = 6 > 0
       | 1  0   −2 |
The second order condition is satisfied for maximization of functions.
Px = 100 − 2(23.33) = 53.34
Py = 80 − 36.67 = 43.33

Therefore, the monopolist maximizes its profit by selling 23.33 units of good x at a price of 53.34 birr per unit and 36.67 units of good y at a price of 43.33 birr per unit. The multiplier λ = 40/3 measures the marginal effect of the quota constant on the optimal value of the objective function. Note, however, that the Lagrangian here was written with λ(x + y − 60) rather than λ(60 − x − y), so the derivative of maximum profit with respect to the quota is −λ. Since the unconstrained optimum (x = 20, y = 30) gives x + y = 50, the quota of 60 forces overproduction, and relaxing the constraint to x + y = 61 lowers maximum profit by approximately 40/3 units.
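A numeric check of this example (Python sketch, not part of the original text). Because the Lagrangian enters the constraint as λ(x + y − 60), the envelope derivative of maximum profit with respect to the quota is −λ, which the code verifies by re-solving with a quota of 61:

```python
# Monopolist: profit(x, y) = 80x + 60y - 2x^2 - y^2, quota x + y = q.
# From the FOCs, x = 20 + lam/4 and y = 30 + lam/2, so lam = 4*(q - 50)/3.

def profit(x, y):
    return 80*x + 60*y - 2*x**2 - y**2

def solve(q):
    lam = 4 * (q - 50) / 3
    return 20 + lam / 4, 30 + lam / 2, lam

x, y, lam = solve(60)
print(round(x, 2), round(y, 2), round(lam, 2))   # 23.33 36.67 13.33
# The unconstrained optimum is x = 20, y = 30 (x + y = 50), so the quota
# forces overproduction; raising it to 61 lowers maximum profit:
x1, y1, _ = solve(61)
print(round(profit(x1, y1) - profit(x, y), 2))   # -14.0
```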
ii) Minimization
Example 3: A firm wants to determine the least-cost combination of inputs for producing a given level of output Q. The production function is Q = f(L, K) and the cost function of the firm is C = L PL + K PK, where L = labor, K = capital and Q = output. Assuming the prices of both inputs are exogenous, we can formulate the cost-minimization problem as
Minimize C = PL L + PK K
Subject to Q = f(L, K)
To determine the amounts of labor and capital that should be employed, we first formulate the Lagrange function:
L = L PL + K PK + λ(Q − f(L, K)) ………………………… (15)
First-order conditions for minimum cost are
LL = PL − λQL = 0
λ = PL/QL = PL/MPL ………………………… (16)
LK = PK − λQK = 0
λ = PK/QK = PK/MPK ………………………… (17)
Lλ = Q − f(K, L) = 0 ………………………… (18)

where QL and QK represent the marginal products of labor and capital respectively.
From equations (16) and (17), we get
λ = PL/MPL = PK/MPK ………………………… (19)

Equation (19) indicates that, at the point of optimal input combination, the ratio of price to marginal product must be the same for every input. This ratio shows the expenditure per unit of marginal product of the input under consideration. Thus, the interpretation of the Lagrange multiplier is the marginal cost of production at the optimum: it indicates the effect of a change in output on the total cost of production, i.e., it measures the comparative-static effect of the constraint constant on the optimal value of the objective function.
The first-order condition in equation (19) can also be analyzed in terms of isoquants and isocosts. Rearranging it gives

PL/PK = MPL/MPK ………………………… (20)
The ratio MPL/MPK is the negative of the slope of the isoquant, and measures the marginal rate of technical substitution of labor for capital (MRTSLK).
The ratio PL/PK is the negative of the slope of the isocost. An isocost is a line indicating the locus of input combinations which entail the same total cost. It is given by the equation
C = PL L + PK K, or K = C/PK − (PL/PK) L

The condition PL/PK = MPL/MPK thus indicates that the isocost and the isoquant are tangent to each other at the point of optimal input combination.
Second-order condition for minimization of cost: a negative bordered Hessian determinant is sufficient for the cost to be at its minimum value. That is,

       | 0   QL   QK  |
|H̄| = | QL  LLL  LLK | < 0
       | QK  LKL  LKK |
Example 4: Suppose a firm produces an output Q using labor L and capital K with production function Q = 10K^0.5 L^0.5. Output is restricted to 200 units, the price of labor is 10 birr per unit and the price of capital is 40 birr per unit. Determine the amounts of L and K that should be employed at minimum cost, and find the minimum cost.
The problem is: Minimize C = 10L + 40K
Subject to 200 = 10K^0.5 L^0.5
Formulating the Lagrange function:
L(L, K, λ) = 10L + 40K + λ(200 − 10K^0.5 L^0.5) ………………………… (21)
First-order conditions:
LL = 10 − 5λK^0.5 L^−0.5 = 0
λ = 2L^0.5 / K^0.5 ………………………… (22)
LK = 40 − 5λK^−0.5 L^0.5 = 0
λ = 8K^0.5 / L^0.5 ………………………… (23)
Lλ = 200 − 10K^0.5 L^0.5 = 0
10K^0.5 L^0.5 = 200 ………………………… (24)
From equations (22) and (23), we get
2L^0.5 / K^0.5 = 8K^0.5 / L^0.5
2L = 8K
L = 4K ………………………… (25)
Substituting equation (25) into (24) gives us
K^0.5 (4K)^0.5 = 20 ………………………… (26)
2K = 20
K = 10, L = 4(10) = 40, and λ = 4
Second-order condition: we should check that the cost of production is least at K = 10 and L = 40. For cost minimization the determinant of the bordered Hessian matrix must be less than zero:

       | 0   QL   QK  |
|H̄| = | QL  LLL  LLK | < 0
       | QK  LKL  LKK |

At L = 40 and K = 10:
QL = ∂Q/∂L = 5√(K/L) = 5√(10/40) = 2.5
QK = ∂Q/∂K = 5√(L/K) = 5√(40/10) = 10
LLL = 2.5λK^0.5 L^−1.5 = 2.5(4)(10)^0.5 (40)^−1.5 = 0.125
LKK = 2.5λK^−1.5 L^0.5 = 2.5(4)(10)^−1.5 (40)^0.5 = 2
LKL = LLK = −2.5λK^−0.5 L^−0.5 = −2.5(4)(10)^−0.5 (40)^−0.5 = −0.5

Therefore, the determinant of the bordered Hessian matrix is

       | 0    2.5    10   |
|H̄| = | 2.5  0.125  −0.5 | = −2.5(5 + 5) + 10(−1.25 − 1.25) = −50 < 0
       | 10   −0.5   2    |
Thus, the firm minimizes its cost by employing 10 units of capital and 40 units of labor in the production process, and the minimum cost is
C = 10(40) + 40(10) = 400 + 400 = 800 birr
In this problem K, L and λ are endogenous. The Lagrange multiplier λ measures the responsiveness of the objective function to a change in the constant of the constraint function.
What happens to the value of the Lagrange function and the constrained function when total output increases from 200 to 201? What about the amounts of L and K? Compare the value of the constrained function and that of the Lagrange function at this point, and interpret the value of λ.
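These questions can be answered from the tangency condition L = 4K, which holds for any output level and implies a minimum-cost function C*(Q) = 4Q. A short check (Python sketch, not part of the original text):

```python
# Tangency gives L = 4K regardless of Q, so the constraint
# 10 * K**0.5 * L**0.5 = Q becomes 20K = Q, i.e. K = Q/20 and L = Q/5.

def min_cost(Q):
    K, L = Q / 20, Q / 5
    return 10 * L + 40 * K          # equals 4*Q

c200, c201 = min_cost(200), min_cost(201)
print(round(c200, 2), round(c201, 2), round(c201 - c200, 2))   # 800.0 804.0 4.0
```

Here the one-unit rise in output raises minimum cost by exactly λ = 4 birr because C*(Q) is linear; in general the discrete change only approximates λ.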
Constrained Optimization of the n-variable case
Given the objective function
Optimize Z = f(x1, x2, x3, …, xn)
Subject to g(x1, x2, x3, …, xn) = c
as in our earlier discussion, we first formulate the Lagrange function:
L = f(x1, x2, x3, …, xn) + λ(c − g(x1, x2, x3, …, xn))
The necessary condition for an optimum of this function is
Lλ = L1 = L2 = L3 = … = Ln = 0
The second-order condition depends on the sign of d²L subject to dg = g1 dx1 + g2 dx2 + g3 dx3 + … + gn dxn = 0, as before. The positive or negative definiteness of d²L involves the bordered Hessian determinant test; in this case, however, the conditions are expressed in terms of the bordered principal minors of the Hessian. The bordered Hessian is

       | 0   g1   g2   …  gn  |
       | g1  L11  L12  …  L1n |
|H̄| = | g2  L21  L22  …  L2n |
       | …   …    …       …   |
       | gn  Ln1  Ln2  …  Lnn |
The successive bordered principal minors are

        | 0   g1   g2  |           | 0   g1   g2   g3  |
|H̄2| = | g1  L11  L12 |,  |H̄3| = | g1  L11  L12  L13 |,  etc.
        | g2  L21  L22 |           | g2  L21  L22  L23 |
                                   | g3  L31  L32  L33 |

with |H̄| = |H̄n|.


|H̄2| is the second principal minor of the Hessian bordered with 0, g1 and g2. d²L is positive definite subject to dg = 0 if and only if |H̄2|, |H̄3|, …, |H̄n| < 0; conversely, d²L is negative definite subject to dg = 0 if and only if |H̄2| > 0, |H̄3| < 0, |H̄4| > 0, ….
A positive definite d²L is a sufficient condition for a minimum, and a negative definite d²L is a sufficient condition for a maximum of the objective function. In this notation, |H̄2| is the minor which contains L22 as the last element of its principal diagonal, |H̄3| the one which contains L33, and so on.
Optimization when there is more than one equality constraint
Consider an optimization problem with three variables and two constraints:
Optimize Z = f(x1, x2, x3)
Subject to g¹(x1, x2, x3) = c¹
          g²(x1, x2, x3) = c²
As usual, we construct the Lagrange function using Lagrange multipliers. Since there are two constraint functions, we need two multipliers, λ1 and λ2:
L = f(x1, x2, x3) + λ1(c¹ − g¹(x1, x2, x3)) + λ2(c² − g²(x1, x2, x3))
First-order conditions for optimization (writing gʲi for ∂gʲ/∂xi):
L1 = f1 − λ1 g¹1 − λ2 g²1 = 0
L2 = f2 − λ1 g¹2 − λ2 g²2 = 0
L3 = f3 − λ1 g¹3 − λ2 g²3 = 0
Lλ1 = c¹ − g¹(x1, x2, x3) = 0
Lλ2 = c² − g²(x1, x2, x3) = 0
With n variables and m constraints, the Lagrange function becomes
L = f(x1, x2, x3, …, xn) + Σⱼ₌₁ᵐ λj [cʲ − gʲ(x1, x2, x3, …, xn)]

In this case the Lagrange function has m + n variables, and correspondingly m + n simultaneous equations.
First-order conditions are
Li = fi − Σⱼ₌₁ᵐ λj gʲi = 0, (i = 1, 2, 3, …, n)
Lλj = cʲ − gʲ(x1, x2, x3, …, xn) = 0, (j = 1, 2, 3, …, m)
Second-order conditions for the three-variable, two-constraint problem involve the bordered Hessian

       | 0    0    g¹1  g¹2  g¹3 |
       | 0    0    g²1  g²2  g²3 |
|H̄| = | g¹1  g²1  L11  L12  L13 |
       | g¹2  g²2  L21  L22  L23 |
       | g¹3  g²3  L31  L32  L33 |

In this case |H̄3| = |H̄|, and it is the only bordered principal minor to be checked: for a maximum value, |H̄3| < 0; for a minimum, |H̄3| > 0. With n variables and m constraints, the second-order condition is expressed as follows.
The bordered Hessian takes the partitioned form

       | 0   G |
|H̄| = |       |
       | Gᵀ  H |

where 0 is the m × m zero matrix, G is the m × n matrix of constraint derivatives gʲi = ∂gʲ/∂xi, Gᵀ is its transpose, and H is the plain n × n Hessian of the second partials Lij.
The bordered Hessian determinant is thus divided into four parts: the upper-left block contains only zeros, the lower-right block is simply a plain Hessian, and the two remaining blocks contain the gʲi derivatives, which are mirror images of each other across the principal diagonal of the bordered Hessian.
Several bordered principal minors can be created from |H̄|. The second-order sufficient condition for optimization is checked using the signs of the bordered principal minors
|H̄m+1|, |H̄m+2|, …, |H̄n|
The objective function attains a maximum when these successive bordered principal minors alternate in sign, the sign of |H̄m+1| being that of (−1)^(m+1). For a minimum value, the sufficient condition is that all the bordered principal minors have the same sign, that of (−1)^m. This means that, for a minimum, all the bordered principal minors are negative when there is an odd number of constraints and positive when the number of constraints is even.

3.3. Inequality Constraints and Kuhn-Tucker Theorems


Nonlinear Programming
The problem of optimizing an objective function subject to certain restrictions or constraints is a usual phenomenon in economics. Mostly, the methods of maximizing or minimizing a function involve equality constraints: for instance, utility may be maximized subject to the consumer's fixed income, with the budget constraint given as an equation. Such optimization is referred to as classical optimization. An objective function subject to inequality constraints, however, is optimized using the methods of mathematical programming. If the objective function and the inequality constraints are all linear, we use linear programming; if the objective function or the inequality constraints are nonlinear, we apply the techniques of nonlinear programming.
Maximization problem
Maximize π = f(x1, x2, x3, …, xn)
Subject to g¹(x1, x2, x3, …, xn) ≤ k1
          g²(x1, x2, x3, …, xn) ≤ k2
          g³(x1, x2, x3, …, xn) ≤ k3
          ⋮
          gᵐ(x1, x2, x3, …, xn) ≤ km, and xj ≥ 0 (j = 1, 2, 3, …, n)
Minimization problem
It can be expressed in the form
Minimize C = f(x1, x2, x3, …, xn)
Subject to g¹(x1, x2, x3, …, xn) ≥ k1
          g²(x1, x2, x3, …, xn) ≥ k2
          g³(x1, x2, x3, …, xn) ≥ k3
          ⋮
          gᵐ(x1, x2, x3, …, xn) ≥ km, xj ≥ 0 (j = 1, 2, 3, …, n)

where C represents total cost (the objective function), xj is the amount of output produced, ki is the constant of the i-th constraint, and gⁱ is the i-th constraint function.


As the above expressions show, a nonlinear program also has three ingredients:

• The objective function

• A set of (inequality) constraints

• Non-negativity restrictions on the choice variables

The objective function and the inequality constraints are assumed to be differentiable with respect to each choice variable. As in linear programming, a maximization problem uses ≤ constraints and a minimization problem involves only ≥ constraints.
Now let us discuss Kuhn-Tucker conditions in two steps for the purpose of making the
explanation easy to understand.
Step 1
In the first step, consider the problem of optimizing an objective function with non-negativity restrictions but no other constraints. In economics, the most common inequality constraint is the non-negativity constraint.
Maximize π = f(x)
Subject to x ≥ 0
where the function is assumed to be continuous and smooth. Given the restriction x ≥ 0, there are three possible outcomes, as shown in the following figures.
When the local maximum lies inside the shaded feasible region, as at point B of diagram (i), we have an interior solution. In this case, the first-order condition is the same as in classical optimization, i.e. dπ/dx = 0.

Diagram (ii) shows a local maximum located on the vertical axis at point C. At this point, the choice variable is 0 and the first derivative is zero, i.e. dπ/dx = 0; point C is a boundary solution.
Diagram (iii) indicates that the local maximum may lie at point D or point E within the feasible region. In this case, the maximum point is characterized by the inequality dπ/dx < 0, because the curves are on their decreasing portions at these points.
From the above discussion it is clear that the following three cases cover the value of the choice variable that gives the local maximum of the objective function:
f′(x) = 0 and x > 0 (point B)
f′(x) = 0 and x = 0 (point C)
f′(x) < 0 and x = 0 (points D and E)
Combining these three conditions into one statement gives us
f′(x) ≤ 0, x ≥ 0 and x f′(x) = 0
The first inequality summarizes the information about dπ/dx; the second is the non-negativity restriction; the third requires the product of x and f′(x) to vanish (complementary slackness). This combined statement is the first-order necessary condition for the objective function to attain a local maximum when the choice variable must be non-negative.
If the problem involves n choice variables,
Maximize π = f(x1, x2, x3, …, xn)
Subject to xi ≥ 0
the first-order condition of classical optimization is
f1 = f2 = f3 = … = fn = 0
and the first-order condition that must be satisfied here to determine the values of the choice variables which maximize the objective function is
fi ≤ 0, xi ≥ 0 and xi fi = 0 (i = 1, 2, 3, …, n)
where fi is the partial derivative of the objective function with respect to xi, i.e., fi = ∂π/∂xi.
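These combined conditions are easy to test mechanically. A toy illustration (Python; the objective π = −(x + 1)² is a hypothetical example whose maximum over x ≥ 0 is the boundary solution x = 0):

```python
# One-variable Kuhn-Tucker check: f'(x) <= 0, x >= 0, x * f'(x) = 0.
# Hypothetical objective pi = -(x + 1)**2, so f'(x) = -2*(x + 1).

def f_prime(x):
    return -2 * (x + 1)

def kt_holds(x, tol=1e-9):
    return (f_prime(x) <= tol) and (x >= -tol) and (abs(x * f_prime(x)) <= tol)

print(kt_holds(0.0))   # True  (boundary solution: f'(0) = -2 < 0 and x = 0)
print(kt_holds(1.0))   # False (x > 0 but f'(1) != 0)
```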

Step 2
Now let us incorporate inequality constraints into the problem. To simplify the analysis, first consider a maximization problem with three choice variables and two constraints:
Maximize π = f(x1, x2, x3)
Subject to g¹(x1, x2, x3) ≤ k1
          g²(x1, x2, x3) ≤ k2
and x1, x2, x3 ≥ 0
Using the dummy (slack) variables s1 and s2, we can change the above problem into
Maximize π = f(x1, x2, x3)
Subject to g¹(x1, x2, x3) + s1 = k1
          g²(x1, x2, x3) + s2 = k2
x1, x2, x3 ≥ 0 and s1, s2 ≥ 0
If the non-negativity constraints on the choice variables did not exist, we could formulate the Lagrange function in the classical way as
L = f(x1, x2, x3) + λ1[k1 − g¹(x1, x2, x3) − s1] + λ2[k2 − g²(x1, x2, x3) − s2]
It is possible to derive the Kuhn-Tucker conditions directly from this Lagrange function. For the above 3-variable, 2-constraint problem, the classical first-order condition would be
∂L/∂x1 = ∂L/∂x2 = ∂L/∂x3 = ∂L/∂s1 = ∂L/∂s2 = ∂L/∂λ1 = ∂L/∂λ2 = 0
However, the xj and si variables are restricted to be non-negative. As a result, the first-order conditions on these variables must be modified as follows:
∂L/∂xj ≤ 0, xj ≥ 0 and xj (∂L/∂xj) = 0
∂L/∂si ≤ 0, si ≥ 0 and si (∂L/∂si) = 0
∂L/∂λi = 0, where (i = 1, 2 and j = 1, 2, 3)

However, we can combine the last two lines and thereby eliminate the dummy variables from the first-order conditions. Since ∂L/∂si = −λi, the second line says
−λi ≤ 0, si ≥ 0 and −si λi = 0
or λi ≥ 0, si ≥ 0 and si λi = 0
But we know that si = ki − gⁱ(x1, x2, x3). Substituting this in place of si, we get

ki − gⁱ(x1, x2, x3) ≥ 0, λi ≥ 0 and λi[ki − gⁱ(x1, x2, x3)] = 0


Therefore, the first-order conditions without dummy variables are expressed as
∂L/∂xj ≤ 0, xj ≥ 0 and xj (∂L/∂xj) = 0
∂L/∂λi = ki − gⁱ(x1, x2, x3) ≥ 0, λi ≥ 0 and λi[ki − gⁱ(x1, x2, x3)] = 0

These are the Kuhn-Tucker conditions for the given maximization problem.
How can we solve a minimization problem?
One method is to convert it into a maximization problem and then apply the same procedure as for maximization: minimizing C is equivalent to maximizing (−C). Keep in mind, however, that each constraint inequality must then be multiplied by (−1).
Alternatively, we can apply the Lagrange multiplier method directly and state the minimization version of the Kuhn-Tucker conditions, without converting the inequality constraints into equalities using dummy variables:
∂L/∂xj ≥ 0, xj ≥ 0 and xj (∂L/∂xj) = 0
∂L/∂λi ≤ 0, λi ≥ 0 and λi (∂L/∂λi) = 0 (minimization)
Example 5: Minimize C= x2+ y2
Subject to x y≥ 25
x, y ≥ 0
The Lagrange function for this problem is
𝐿 = 𝑥 2 + 𝑦 2 + 𝜆(25 − 𝑥𝑦)
It is a minimization problem. Therefore, the appropriate conditions are
∂L/∂x = 2x − λy ≥ 0,   x ≥ 0   and   x·(∂L/∂x) = 0

∂L/∂y = 2y − λx ≥ 0,   y ≥ 0   and   y·(∂L/∂y) = 0

∂L/∂λ = 25 − xy ≤ 0,   λ ≥ 0   and   λ·(∂L/∂λ) = 0

Can we determine a non-negative value of λ which satisfies all the above conditions together
with the optimal solutions x and y? The optimal solutions from our earlier discussion are
x = 5 and y = 5, which are nonzero. Thus, the complementary slackness conditions
(x·(∂L/∂x) = 0 and y·(∂L/∂y) = 0) show that ∂L/∂x = 0 and ∂L/∂y = 0.

Thus, we can determine the value of λ by substituting the optimal values of the choice
variables into either of these marginal conditions:

∂L/∂x = 2x − λy = 0
2(5) − λ(5) = 0
10 − 5λ = 0
λ = 2 > 0
These values, λ = 2, x = 5 and y = 5, imply that ∂L/∂x = 0, ∂L/∂y = 0 and ∂L/∂λ = 0, which
fulfils both the marginal conditions and the complementary slackness conditions. In other
words, all the Kuhn-Tucker conditions are satisfied.
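As a numerical cross-check (our own illustration, not from the text), the sketch below verifies the minimization Kuhn-Tucker conditions of Example 5 at x = 5, y = 5, λ = 2, and compares the objective value against a coarse brute-force scan of the feasible region.

```python
# Numerical cross-check of Example 5: minimize C = x^2 + y^2
# subject to xy >= 25 and x, y >= 0.
x, y, lam = 5.0, 5.0, 2.0
dL_dx = 2 * x - lam * y      # marginal condition in x
dL_dy = 2 * y - lam * x      # marginal condition in y
dL_dlam = 25 - x * y         # constraint slack

# Interior solution (x, y > 0), so both marginals must vanish.
assert dL_dx == 0 and dL_dy == 0
# Constraint satisfied, multiplier non-negative, complementary slackness.
assert dL_dlam <= 0 and lam >= 0 and lam * dL_dlam == 0

# Coarse brute-force scan of the feasible region agrees: C = 50 at (5, 5).
grid = [i / 10 for i in range(1, 200)]
best = min(u * u + v * v for u in grid for v in grid if u * v >= 25)
print(best)  # 50.0
```

The scan confirms that no feasible grid point achieves a cost below 50, consistent with the Kuhn-Tucker solution.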
Example 6: Maximize Z = 10x − x² + 180y − y²
Subject to x + y ≤ 80 and x, y ≥ 0
Solution
First, we should formulate the Lagrange function assuming the equality constraint and
ignoring the non-negativity constraints.
L = 10x − x² + 180y − y² + λ(80 − x − y)
The first order conditions are
∂L/∂x = 10 − 2x − λ = 0  ⇒  λ = 10 − 2x -------------------- (1)
∂L/∂y = 180 − 2y − λ = 0  ⇒  λ = 180 − 2y -------------------- (2)
∂L/∂λ = 80 − x − y = 0  ⇒  x + y = 80 -------------------- (3)
Taking equations (1) and (2) together:
10 − 2x = 180 − 2y
2y − 2x = 170
2y = 170 + 2x
y = 85 + x -------------------- (4)
If we substitute equation (4) into (3), we get
x + 85 + x = 80
2x = −5  ⇒  x = −2.5
However, the values of the choice variables are restricted to be non-negative, so x* = −2.5 is
infeasible. We must set x = 0. Now we can determine the value of y by substituting zero in
place of x in equation (3):
0 + y = 80
y* = 80
Therefore, λ* = 180 − 2(80) = 20.
The candidate solution is x* = 0, y* = 80, λ* = 20.
However, we must check the inequality constraints and the complementary slackness
conditions to decide whether these values are solutions or not.
1) Inequality constraints
i) The non-negativity restrictions are satisfied since x = 0, y = 80 and λ = 20 are all ≥ 0.
ii) The resource constraint is satisfied:
x + y ≤ 80
0 + 80 ≤ 80
2) Complementary Slackness conditions
i) x·(∂L/∂x) = 0: since x = 0, ∂L/∂x is allowed to be negative because the problem is a maximization.
∂L/∂x = 10 − 2(0) − 20 = −10 < 0

ii) y·(∂L/∂y) = 0: since y = 80 ≠ 0, we need ∂L/∂y = 0.
∂L/∂y = 180 − 2(80) − 20 = 0

iii) λ·(∂L/∂λ) = 0: since λ = 20 ≠ 0, we need ∂L/∂λ = 0.
∂L/∂λ = 80 − 0 − 80 = 0
All the Kuhn-Tucker conditions are satisfied. Thus, the objective function is maximized
when x* = 0, y* = 80 and λ* = 20.
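A numerical cross-check of Example 6 (our own illustration, not from the text) confirms both the Kuhn-Tucker signs at the corner solution and the optimality of Z = 8000 over a brute-force integer grid.

```python
# Numerical cross-check of Example 6: maximize Z = 10x - x^2 + 180y - y^2
# subject to x + y <= 80 and x, y >= 0.
def Z(x, y):
    return 10 * x - x * x + 180 * y - y * y

x, y, lam = 0, 80, 20
assert 10 - 2 * x - lam < 0      # dL/dx = -10 < 0, allowed since x = 0
assert 180 - 2 * y - lam == 0    # dL/dy = 0, required since y > 0
assert 80 - x - y == 0           # constraint binds, consistent with lam > 0

# Brute-force scan over the integer grid of the feasible region agrees.
best = max(Z(u, v) for u in range(81) for v in range(81) if u + v <= 80)
print(best, Z(0, 80))  # 8000 8000
```

The scan shows that no feasible integer point beats the corner solution (0, 80), matching the analytical result that the unconstrained optimum in x (x = 5) is crowded out by the much larger marginal return to y.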
Example 7: The revenue and cost functions of a firm are R = 32x − x² and C = x² + 8x + 4,
where x is output. Suppose the minimum acceptable profit is π₀ = 18. Determine the level of
output which maximizes revenue subject to this minimum profit. In this case, the revenue
function is concave and the cost function is convex.
The problem is
Maximize R = 32x − x²
Subject to x² + 8x + 4 − 32x + x² ≤ −18 (that is, C − R ≤ −π₀, so that profit R − C ≥ 18)
and x ≥ 0
Under these conditions, the Kuhn-Tucker conditions are both necessary and sufficient, as all
of the above three conditions, i.e., (1), (2) and (3), are satisfied.
The Lagrange function of this problem is
L = 32x − x² + λ(−22 − 2x² + 24x) -------------------- (1)
Thus,
∂L/∂x = 32 − 2x − 4λx + 24λ = 0 -------------------- (2)
∂L/∂λ = −22 − 2x² + 24x = 0 -------------------- (3)
𝜕𝜆

From equation (3),
−22 − 2x² + 24x = 0  ⇒  2x² − 24x + 22 = 0 -------------------- (4)
Solving (4), we get x = 1 or x = 11; substituting each into equation (2) gives the
corresponding multipliers λ = −3/2 or λ = 1/2.

However, we must check the inequality constraints and the complementary slackness
conditions to decide whether these values are the solutions or not
∂L/∂x ≤ 0,   x ≥ 0   and   x·(∂L/∂x) = 0 -------------------- (5)

∂L/∂λ ≥ 0,   λ ≥ 0   and   λ·(∂L/∂λ) = 0 -------------------- (6)

At x = 1:
Since x > 0, complementary slackness implies ∂L/∂x = 0. Thus ∂L/∂x = 32 − 2 − 4λ + 24λ =
30 + 20λ = 0, which gives λ = −3/2. This does not satisfy condition (6), so x = 1 is rejected.
At x = 11:
Since x > 0, again ∂L/∂x = 0. Thus ∂L/∂x = 32 − 22 − 44λ + 24λ = 10 − 20λ = 0, which gives
λ = 1/2. This satisfies both conditions (5) and (6), so the Kuhn-Tucker conditions are
fulfilled at x = 11.
Therefore, revenue is maximized when the firm sells x = 11 units of output.
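A numerical cross-check of Example 7 (our own illustration, not from the text): the profit constraint −2x² + 24x − 4 ≥ 18 restricts output to 1 ≤ x ≤ 11 (the roots of equation (4)), and a brute-force scan of that region agrees with the Kuhn-Tucker solution x = 11.

```python
# Numerical cross-check of Example 7: maximize R = 32x - x^2 subject to
# profit(x) = -2x^2 + 24x - 4 >= 18, i.e. output restricted to 1 <= x <= 11.
def R(x):
    return 32 * x - x * x

def profit(x):
    return -2 * x * x + 24 * x - 4

# Kuhn-Tucker candidate x = 11 with lam = 1/2.
x, lam = 11.0, 0.5
assert 32 - 2 * x - 4 * lam * x + 24 * lam == 0  # dL/dx = 0 since x > 0
assert profit(x) == 18                            # constraint binds, lam > 0

# Brute-force scan of the feasible region agrees: R is maximized at x = 11.
best = max(R(i / 100) for i in range(0, 1601) if profit(i / 100) >= 18)
print(best, R(x))  # 231.0 231.0
```

Since R is increasing up to x = 16 but feasibility ends at x = 11, the constrained maximum sits at the upper edge of the feasible interval, exactly where the profit constraint binds.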
