EEC-503: POWER SYSTEM
OPERATION AND CONTROL
LECTURE 10: Optimization Preliminaries
Equality constrained optimization problem
• Minimize 𝑓(𝑥) subject to 𝑔(𝑥) = 0
• Mathematical setup:
L(x, λ) = f(x) + λg(x)
• Optimality conditions for the resulting unconstrained problem:
∂L(x, λ)/∂x = 0 = ∇f(x) + λ∇g(x)
∂L(x, λ)/∂λ = 0 = g(x)
• Graphical illustration: optimize f(x) subject to g(x) = 0
• At the optimum, the gradients of f(x) and g(x) must be collinear.
Collinearity in equality constrained optimization problem
min f(x₁, x₂) = 0.25x₁² + x₂²
subject to
ω(x₁, x₂) = 5 − x₁ − x₂ = 0
• The Lagrange function is
L = f(x₁, x₂) + λω(x₁, x₂)
L = 0.25x₁² + x₂² + λ(5 − x₁ − x₂)
• At optimality,
∂L/∂x₁ = 0.5x₁ + λ(−1) = 0
∂L/∂x₂ = 2x₂ + λ(−1) = 0
λ = −0.5x₁/(−1) = 0.5x₁
λ = −2x₂/(−1) = 2x₂
Substituting x₁ = 2λ and x₂ = 0.5λ into 5 − x₁ − x₂ = 0:
5 − 2λ − 0.5λ = 0 ⇒ λ = 2
x₁ = 4; x₂ = 1; f = 5
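• A minimal symbolic check of this result (assuming SymPy is available; the variable names are illustrative) solves the same stationarity conditions and also prints the gradients of f and of the constraint at the optimum, anticipating the collinearity check on the next slides.

```python
# Minimal SymPy sketch: solve the stationarity conditions of L = f + lam*w.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
f = sp.Rational(1, 4)*x1**2 + x2**2           # 0.25*x1**2 + x2**2
w = 5 - x1 - x2                               # equality constraint w(x1, x2) = 0
L = f + lam*w                                 # Lagrange function

# dL/dx1 = dL/dx2 = dL/dlam = 0
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]
print(sol, f.subs(sol))                       # expected: x1 = 4, x2 = 1, lam = 2, f = 5

# Gradient collinearity at the optimum: grad f = lam * grad(x1 + x2 - 5)
print([sp.diff(f, v).subs(sol) for v in (x1, x2)])     # [2, 2]
print([sp.diff(x1 + x2 - 5, v) for v in (x1, x2)])     # [1, 1]
```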
Collinearity in equality constrained optimization problem
• min f(x₁, x₂) = 0.25x₁² + x₂² subject to ω(x₁, x₂) = 5 − x₁ − x₂ = 0
• Where does the optimal point lie?
Collinearity in equality constrained optimization problem
For x₁: λ = (∂f/∂x₁)/(∂g/∂x₁) = 0.5x₁/1 = 2 (with g = x₁ + x₂ − 5 = −ω)
For x₂: λ = (∂f/∂x₂)/(∂g/∂x₂) = 2x₂/1 = 2
• The common value λ = 2 at (x₁, x₂) = (4, 1) confirms that ∇f and ∇g are collinear at the optimum.
Multiple equality constrained optimization problem
min f(x₁, x₂) subject to ω₁(x₁, x₂) = 0, ω₂(x₁, x₂) = 0, ω₃(x₁, x₂) = 0
L = f(x₁, x₂) + λ₁ω₁(x₁, x₂) + λ₂ω₂(x₁, x₂) + λ₃ω₃(x₁, x₂)
• At optimum,
∂L/∂x₁ = 0
∂L/∂x₂ = 0
∂L/∂λ₁ = 0
∂L/∂λ₂ = 0
∂L/∂λ₃ = 0
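• The same pattern extends mechanically. The sketch below (assuming SymPy is available) uses a hypothetical instance, not from the lecture, with three variables and two equality constraints: one multiplier and one stationarity equation per constraint, plus one per variable.

```python
# Hypothetical example (illustrative only): min x1**2 + x2**2 + x3**2
# subject to w1 = x1 + x2 - 1 = 0 and w2 = x2 + x3 - 2 = 0.
import sympy as sp

x1, x2, x3, lam1, lam2 = sp.symbols('x1 x2 x3 lam1 lam2')
f  = x1**2 + x2**2 + x3**2
w1 = x1 + x2 - 1
w2 = x2 + x3 - 2
L  = f + lam1*w1 + lam2*w2

# Five stationarity conditions: dL/dx1, dL/dx2, dL/dx3, dL/dlam1, dL/dlam2
eqs = [sp.diff(L, v) for v in (x1, x2, x3, lam1, lam2)]
print(sp.solve(eqs, [x1, x2, x3, lam1, lam2], dict=True)[0])
```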
Optimization with equality and inequality constraints
• Minimize 𝑓(𝑥) subject to 𝑔(𝑥) = 0, ℎ(𝑥) ≤ 0
• Inequality constraints can be binding or non-binding.
• This is solved using the KKT (Karush-Kuhn-Tucker) optimality conditions.
• Mathematical setup:
L(x, λ, μ) = f(x) + λg(x) + μh(x)
• KKT optimality conditions:
∂L(x, λ, μ)/∂x = 0 = ∇f(x) + λ∇g(x) + μ∇h(x)
∂L(x, λ, μ)/∂λ = 0 = g(x)
μh(x) = 0, μ ≥ 0  ⇒ complementary slackness conditions
• For binding constraints, h(x) = 0 and μ ≠ 0
• For non-binding constraints, h(x) < 0 and μ = 0
Optimization with equality and inequality constraints
min f(x₁, x₂) = 0.25x₁² + x₂²
Subject to
ω(x₁, x₂) = 5 − x₁ − x₂ = 0
g(x₁, x₂) = x₁ + 0.2x₂ − 3 ≤ 0
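• A quick numerical cross-check is possible before working the KKT conditions by hand on the next slide. The sketch below assumes SciPy is available; SLSQP expects inequality constraints in the form c(x) ≥ 0, so g(x₁, x₂) ≤ 0 is passed as −g(x₁, x₂) ≥ 0.

```python
# Numerical cross-check with SciPy's SLSQP solver.
import numpy as np
from scipy.optimize import minimize

f = lambda x: 0.25*x[0]**2 + x[1]**2
cons = [
    {'type': 'eq',   'fun': lambda x: 5 - x[0] - x[1]},        # w(x1, x2) = 0
    {'type': 'ineq', 'fun': lambda x: -(x[0] + 0.2*x[1] - 3)},  # g(x1, x2) <= 0
]
res = minimize(f, x0=np.array([1.0, 1.0]), method='SLSQP', constraints=cons)
print(res.x, res.fun)   # expected: approximately [2.5, 2.5] and 7.8125
```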
Optimization with equality and inequality constraints
min f(x₁, x₂) = 0.25x₁² + x₂²
Subject to
ω(x₁, x₂) = 5 − x₁ − x₂ = 0
g(x₁, x₂) = x₁ + 0.2x₂ − 3 ≤ 0
L = f(x₁, x₂) + λω(x₁, x₂) + μg(x₁, x₂)
L = 0.25x₁² + x₂² + λ(5 − x₁ − x₂) + μ(x₁ + 0.2x₂ − 3)
• At optimality,
• ∂L/∂x₁ = 0.5x₁ + λ(−1) + μ(1) = 0
• ∂L/∂x₂ = 2x₂ + λ(−1) + μ(0.2) = 0
• ∂L/∂λ = 5 − x₁ − x₂ = 0
• μ(x₁ + 0.2x₂ − 3) = 0
• μ ≥ 0
• Case μ = 0 (non-binding: x₁ + 0.2x₂ − 3 < 0):
0.5x₁ + λ(−1) = 0; 2x₂ + λ(−1) = 0; 5 − x₁ − x₂ = 0
5 − 2λ − 0.5λ = 0 ⇒ λ = 2; x₁ = 4; x₂ = 1
But x₁ + 0.2x₂ − 3 = 1.2 > 0, so the inequality constraint is violated and this case is rejected.
• Case μ > 0 (binding: x₁ + 0.2x₂ − 3 = 0):
0.5x₁ + λ(−1) + μ(1) = 0; 2x₂ + λ(−1) + μ(0.2) = 0; 5 − x₁ − x₂ = 0; x₁ + 0.2x₂ − 3 = 0
x₁ = 2.5; x₂ = 2.5; λ = 5.9375; μ = 4.6875; f = 7.8125
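• The two complementary-slackness cases can be reproduced symbolically. A minimal sketch (assuming SymPy is available): the μ = 0 case is rejected because it violates g, and the binding case returns λ = 5.9375 and μ = 4.6875.

```python
# SymPy sketch of the two complementary-slackness cases.
import sympy as sp

x1, x2, lam, mu = sp.symbols('x1 x2 lam mu')
f = sp.Rational(1, 4)*x1**2 + x2**2
w = 5 - x1 - x2                            # equality constraint w = 0
g = x1 + sp.Rational(1, 5)*x2 - 3          # inequality constraint g <= 0

# Case mu = 0 (g assumed non-binding)
L0 = f + lam*w
c0 = sp.solve([sp.diff(L0, x1), sp.diff(L0, x2), w], [x1, x2, lam], dict=True)[0]
print(c0, g.subs(c0))      # x1 = 4, x2 = 1, but g = 6/5 > 0 -> rejected

# Case mu > 0 (g binding, g = 0)
L1 = f + lam*w + mu*g
c1 = sp.solve([sp.diff(L1, x1), sp.diff(L1, x2), w, g], [x1, x2, lam, mu], dict=True)[0]
print(c1, f.subs(c1))      # x1 = x2 = 2.5, lam = 5.9375, mu = 4.6875, f = 7.8125
```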
Optimization with equality and inequality constraints
• Where does this optimal point lie?
Nonlinear problem
Min. f(x, u)
subject to
g(x, u) = 0
h(x, u) ≤ 0
• 𝑢 = set of independent/ control/ optimization variables
• 𝑥 = set of dependent variables
• 𝑔 = equality constraints
• ℎ = inequality constraints
Newton’s First order gradient techniques
• Handling of inequality constraints
• Convert binding inequality constraints to equality constraints.
• For violated inequality constraints, add quadratic penalty terms to the objective function (see the sketch below):
P_Vi = α(V_i − V_i^lim)²
P_QGi = α(Q_Gi − Q_Gi^lim)²
where α is a scaling factor
• The constrained problem is thus converted into an unconstrained problem:
L(x, u) = f(x, u) + λg(x, u) + Penalty
• For a minimization problem, this is known as the steepest-descent technique.
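• A minimal sketch of such a quadratic penalty term (the function name, limits, and α value below are illustrative, not from the lecture):

```python
# Quadratic penalty for a variable that violates its limits; zero when feasible.
def quadratic_penalty(value, lower, upper, alpha):
    """Return alpha * (violation)**2 if value lies outside [lower, upper], else 0."""
    if value > upper:
        return alpha * (value - upper) ** 2
    if value < lower:
        return alpha * (value - lower) ** 2
    return 0.0

# e.g. a bus voltage of 1.08 pu against a 0.95-1.05 pu band, with alpha = 1000
print(quadratic_penalty(1.08, 0.95, 1.05, alpha=1000.0))   # 0.9
```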
Newton’s First order gradient techniques
• At the optimum,
• ∂L/∂λ = 0 = g(x, u) … (1)
• ∂L/∂x = 0 = ∂f(x, u)/∂x + λ ∂g(x, u)/∂x + ∂Penalty/∂x … (2)
• ∂L/∂u = 0 = ∂f(x, u)/∂u + λ ∂g(x, u)/∂u + ∂Penalty/∂u … (3)
• Solution procedure:
1. Initial guess for 𝑢
2. Solve (1) to get x
3. Solve (2) for 𝜆
4. Evaluate ∂L/∂u from (3)
5. u_new = u_old − β ∂L/∂u, where β is the step size. Go to step 2.
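• A minimal sketch of this loop on the earlier example, treating x₁ as the control u and x₂ as the dependent variable x, with g(x, u) = 5 − u − x = 0, f = 0.25u² + x², and no penalty terms; the step size β = 0.1 is an illustrative (heuristic) choice.

```python
# First-order (steepest-descent) loop on the earlier equality-constrained example.
beta = 0.1                      # step size (heuristic choice)
u = 0.0                         # step 1: initial guess for the control variable
for it in range(200):
    x = 5.0 - u                 # step 2: solve g(x, u) = 0 for x
    lam = 2.0 * x               # step 3: dL/dx = 2x - lam = 0  ->  lam = 2x
    dL_du = 0.5 * u - lam       # step 4: dL/du = df/du + lam * dg/du = 0.5u - lam
    if abs(dL_du) < 1e-8:       # stop once the reduced gradient vanishes
        break
    u -= beta * dL_du           # step 5: move against the gradient, go to step 2
print(u, 5.0 - u)               # expected: u = x1 ~ 4, x = x2 ~ 1
```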
Attendance
• MS Teams Channel:
• Please join using the code: po04lh3
• Please ensure 75% of attendance for ETE.
• Please do take permission from me before sharing the slides with anyone!
Comments about the method
• ∂L/∂u is a gradient vector.
• Moving in the negative direction of this vector moves the solution towards the minimum.
• ∂g/∂u and ∂g/∂x are Jacobian matrices.
Handling of inequality constraints
• Converting binding inequalities to equalities
  Advantage: hard / exact enforcement
  Disadvantage: problem size (matrix size) increases
• Penalty approach
  Advantage: soft approach
  Disadvantage: leads to inexact enforcement of binding inequality constraints; choice of α is not easy (heuristic)
Comments about the method
• Advantage: builds upon an already available package for the solution of equality constraints.
• Disadvantages: handling of inequality constraints, choice of β is very difficult, and convergence is extremely poor.
• First-order gradient methods are therefore the least popular.
Newton’s Second order gradient method
• Attempts to satisfy all KKT conditions simultaneously.
• No need to partition the variables into control, dependent, etc.
• Let 𝑧 be the complete set of variables, i.e., 𝑧 = [𝑥, 𝑢].
• Optimality condition is g_d = ∂L/∂z = 0 (a nonlinear set of equations)
• Using Taylor series expansion,
g_d = 0 = ∂L/∂z + (∂²L/∂z²)Δz + (1/2)(∂³L/∂z³)Δz² + higher order terms
• Approximating,
0 − ∂L/∂z = (∂²L/∂z²)Δz
g_d^spec − g_d^cal = (∂²L/∂z²)[Δz]
⇒ 0 − g_d^cal = (∂²L/∂z²)[Δz]
⇒ −g_d^cal = (∂²L/∂z²)[Δz]
Newton’s Second order gradient method
• At the optimum,
• ∂L/∂λ = 0 = g(z) … (1)
• ∂L/∂z = 0 = ∂f(z)/∂z + λ ∂g(z)/∂z + ∂Penalty/∂z … (2)
• Solution procedure
1. Initial guess for 𝑧
2. Evaluate the gradient vector
3. Check for convergence
4. If not converged, form and factorize the Hessian matrix
5. Solve for Δ𝑧
6. Update 𝑧
z_new = z_old + βΔz
7. Go to step 2
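• A minimal sketch of this procedure on the earlier equality-constrained example, with z = [x₁, x₂, λ] and the gradient and Hessian of L written out by hand (β = 1); since the Lagrangian is quadratic here, the Hessian is constant and Newton converges in a single step.

```python
# Newton iteration on the KKT system of min 0.25*x1**2 + x2**2 s.t. 5 - x1 - x2 = 0.
import numpy as np

def grad_L(z):
    x1, x2, lam = z
    return np.array([0.5*x1 - lam,       # dL/dx1
                     2.0*x2 - lam,       # dL/dx2
                     5.0 - x1 - x2])     # dL/dlam

H = np.array([[ 0.5,  0.0, -1.0],        # Hessian of L with respect to z (constant)
              [ 0.0,  2.0, -1.0],
              [-1.0, -1.0,  0.0]])

z = np.zeros(3)                          # step 1: initial guess
for it in range(20):
    gd = grad_L(z)                       # step 2: gradient vector
    if np.linalg.norm(gd) < 1e-10:       # step 3: convergence check
        break
    dz = np.linalg.solve(H, -gd)         # steps 4-5: factorize/solve H dz = -gd
    z = z + dz                           # step 6: update with beta = 1
print(z)                                 # expected: [4, 1, 2]
```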
Comments about the method
• Advantage: very powerful convergence
• Disadvantage: handling of inequality constraints