2LP Lecture Notes
Dr Jeremy Budd,
School of Mathematics
University of Birmingham
Birmingham B15 2TT, UK
Email: j.m.budd@bham.ac.uk
Abstract
These are the lecture notes for the module “Linear Programming” covering LP modelling,
geometric method, duality theory, the simplex method and some extensions of the simplex
method.
Linear programming grew out of attempts to solve systems of linear inequalities, allowing one to optimise linear functions subject to constraints expressed as inequalities. The theory was developed independently at the time of World War II by the Soviet mathematician Kantorovich, for production planning, and by Dantzig, to solve complex military planning problems. Koopmans applied it to shipping problems and the technique enjoyed rapid development in the postwar industrial boom. The first complete algorithm to solve linear programming problems, called the simplex method, was published by Dantzig in 1947 and in the same year von Neumann established the theory of duality. In 1975, Kantorovich and Koopmans shared the Nobel Prize in Economics for their work, and Dantzig's simplex method has been voted the second most important algorithm of the 20th century after the Monte Carlo method. Linear programming is a modern and immensely powerful technique that has numerous applications, not only in business and economics, but also in engineering, transportation, telecommunications, and planning.
• explain why and when the simplex method fails to provide a solution and how to resolve such a situation,
• present, prove and use the results of duality theory and interpret them.
Acknowledgments
These lecture notes are based on lecture notes written by Dr Yun-Bin Zhao, Dr Hon
Duong, and Prof. Marta Mazzocco. I would like to thank them for sharing their notes.
Contents
Abstract 1
1.4 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 LP Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4 Duality theory 34
6.2.1 Pivoting: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Bibliography 128
Chapter 1
This chapter provides key concepts of LP, including its definition and terminology, and its standard and canonical forms. Some important real-world problems that can be modelled and solved using LP are also discussed. The following topics will be covered:
• definition of LP,
• terminology of LP,
• LP modelling.
At the end of this chapter, students should be able to recognise and formulate typical problems from real applications as LPs.
[BT97, Chapter 1] and [DT97, Chapter 1] discuss topics similar to those in this chapter and may be a useful reference.
This value is the optimal value of the objective function. Sometimes we care more about the optimiser(s): the value(s) of the variables which produce this optimal value. This is written
A good understanding of the theory and algorithms for linear programming is essential
for understanding
• game theory,
So, the theory of LP serves as a guide and motivating force for developing results for other
mathematical optimisation problems with either continuous or discrete variables.
Example 1.1. A small machine shop manufactures two models: standard and deluxe.
• The manufacturer has three grinders and two polishers. Therefore, in a 40-hour week there are 120 hours of grinding capacity and 80 hours of polishing capacity.
• There is a net profit of £3 on each standard model and £4 on each deluxe model.
To maximise the profit, the manager must decide on the allocation of the available pro-
duction capacity to standard and deluxe models.
Let x1 and x2 be the numbers of standard and deluxe models manufactured, respectively.
For this small machine shop, there are two constraints (restrictions):
• Grinding restriction:
2x1 + 5x2 ≤ 120.
• Polishing restriction:
4x1 + 2x2 ≤ 80.
x1 ≥ 0, x2 ≥ 0. (Why?)
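Putting the pieces together, the model maximises the profit 3x1 + 4x2 subject to the grinding and polishing constraints above. As an informal check (a sketch, not part of the course's methods), we can enumerate the corner points of the feasible region in Python, since an optimum of a two-variable LP, when one exists, occurs at a corner:

```python
# Machine-shop LP: maximise 3*x1 + 4*x2
# subject to 2*x1 + 5*x2 <= 120 (grinding), 4*x1 + 2*x2 <= 80 (polishing),
# x1, x2 >= 0. We enumerate the corner points of the feasible region.

def feasible(x1, x2):
    return (2*x1 + 5*x2 <= 120 + 1e-9 and 4*x1 + 2*x2 <= 80 + 1e-9
            and x1 >= -1e-9 and x2 >= -1e-9)

# Corner points: intersections of pairs of boundary lines.
corners = [
    (0.0, 0.0),      # x1 = 0 meets x2 = 0
    (20.0, 0.0),     # polishing line meets x2 = 0
    (0.0, 24.0),     # grinding line meets x1 = 0
    (10.0, 20.0),    # grinding line meets polishing line
]

best = max((p for p in corners if feasible(*p)), key=lambda p: 3*p[0] + 4*p[1])
print(best, 3*best[0] + 4*best[1])   # (10.0, 20.0) 110.0
```

So the best plan is 10 standard and 20 deluxe models per week, for a profit of £110.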
xj ≥ 0, j = 1, ..., n.
LP in matrix notation:
minimise (maximise) cT x
subject to Ax (≥, =, ≤) b,
x ≥ 0,
where (≥, =, ≤) indicates that each constraint may use any one of the symbols ≥, =, or ≤.
A = [a1 , a2 , ..., an ]
1.4 Terminology
• The function
cT x = c1 x1 + ... + cn xn
is called the objective function.
• The set of all such points is called the feasible region (or feasible set).
Example 1.3.
minimise 2x1 + x2
subject to x1 + x2 ≤ 8,
x1 ≥ 0, x2 ≥ 0.
For this example, the problem has two decision variables x1 and x2 . The objective function
to be minimised is 2x1 + x2 . The LP problem is to find a point in the feasible region with
the smallest objective value.
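For this small example we can confirm the geometric intuition numerically (a sketch, illustrative only): the feasible region is the triangle with vertices (0, 0), (8, 0) and (0, 8), and since the minimum of a linear function over such a region is attained at a vertex, it suffices to compare the three vertex values.

```python
# Example 1.3: minimise 2*x1 + x2 over the triangle x1 + x2 <= 8, x1, x2 >= 0.
# The candidate optimal points are the triangle's vertices.
vertices = [(0, 0), (8, 0), (0, 8)]
values = {v: 2*v[0] + v[1] for v in vertices}
opt = min(values, key=values.get)
print(opt, values[opt])   # (0, 0) 0
```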
1.5 LP Modeling
• A manufacturing firm has to decide on the mix of iPads and iPhones to be produced.
• Market research indicated that at most 1000 units of iPads and 4000 units of iPhones can be sold per month.
• The unit profits of the iPads and iPhones are £300 and £400 respectively.
It is desired to find the number of units of iPads and iPhones that the firm must produce in order to maximise its profit.
Determine the decision variables. Let x1 be the number of units of iPads produced and x2 the number of units of iPhones.
Determine the explicit constraints. There are two kinds of explicit constraints: the market constraints and the work-hours restriction. For the former, we have
x1 ≤ 1000,
x2 ≤ 4000.
Determine the objective function. The monthly profit (in £) to be maximised is
300x1 + 400x2.
Determine the implicit constraints. The numbers of iPads and iPhones cannot be negative, so x1, x2 ≥ 0.
Thus, using the market constraints, the LP model is:
maximise 300x1 + 400x2
subject to x1 ≤ 1000,
x2 ≤ 4000,
x1, x2 ≥ 0.
Example 1.5 (The Workforce Planning Problem). Consider a restaurant that is open seven days a week. Based on past experience, the number of workers needed on a particular day is given as follows:

Day      Mon  Tue  Wed  Thur  Fri  Sat  Sun
Workers  14   13   15   16    19   18   11
Every worker works five consecutive days and then takes two days off, repeating this pattern indefinitely. How can we minimise the number of workers needed to staff the restaurant?
Define the variables: Let the days be numbered 1 through 7, with Monday as day 1, and let xi be the number of workers who begin their five-day shift on day i.
Let us consider Monday as an example. People working on Monday must have started their shift on Thursday, Friday, Saturday, Sunday or Monday itself. Thus the total number of workers on Monday is
x1 + x4 + x5 + x6 + x7
x1 + x4 + x5 + x6 + x7 ≥ 14.
Similarly, we may consider the other days. Thus, the linear programming model for this problem is given as follows.
min x1 + x2 + x3 + x4 + x5 + x6 + x7
s.t. x1 + x4 + x5 + x6 + x7 ≥ 14 (Mon),
x1 + x2 + x5 + x6 + x7 ≥ 13 (Tue),
x1 + x2 + x3 + x6 + x7 ≥ 15 (Wed),
x1 + x2 + x3 + x4 + x7 ≥ 16 (Thur),
x1 + x2 + x3 + x4 + x5 ≥ 19 (Fri),
x2 + x3 + x4 + x5 + x6 ≥ 18 (Sat),
x3 + x4 + x5 + x6 + x7 ≥ 11 (Sun),
xi ≥ 0, i = 1, . . . , 7.
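The staffing constraints can be checked mechanically. The sketch below (illustrative only) indexes days from 0 = Monday and verifies whether a given vector of shift starts covers the daily requirements:

```python
# Feasibility check for the workforce-planning model. starts[i] is the number
# of workers beginning their 5-day shift on day i (0 = Mon, ..., 6 = Sun).
# A worker starting on day i works days i, i+1, ..., i+4 (mod 7).

REQUIRED = [14, 13, 15, 16, 19, 18, 11]   # Mon ... Sun

def coverage(starts, day):
    # Workers on duty on `day` started on that day or one of the 4 days before.
    return sum(starts[(day - k) % 7] for k in range(5))

def is_feasible(starts):
    return all(coverage(starts, d) >= REQUIRED[d] for d in range(7))

print(is_feasible([4] * 7))   # True: 20 workers on duty every day
print(is_feasible([2] * 7))   # False: only 10 on duty, but Monday needs 14
```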
• There are m different types of food, F1 , . . . , Fm , that supply varying quantities of the
n nutrients, N1 , . . . , Nn , that are essential to good health.
• Let yi be the number of units of food Fi to be purchased and bi the cost per unit of Fi, so that the total cost of the diet is
b1 y1 + b2 y2 + · · · + bm ym . (1.1)
• Let aij be the amount of nutrient Nj contained in one unit of food Fi, and let cj be the minimum daily requirement of nutrient Nj.
We need to ensure that all the minimum daily requirements are met, that is,
a1j y1 + a2j y2 + · · · + amj ym ≥ cj , j = 1, . . . , n. (1.2)
The quantities purchased cannot be negative:
y1 ≥ 0, y2 ≥ 0, . . . , ym ≥ 0. (1.3)
The problem is to minimise (1.1) subject to (1.2) and (1.3). This is exactly the following
LP problem:
minimise b1 y1 + b2 y2 + · · · + bm ym
s.t a1j y1 + a2j y2 + · · · + amj ym ≥ cj , j = 1, . . . , n,
y1 ≥ 0, y2 ≥ 0, . . . , ym ≥ 0.
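To make the model concrete, here is a toy instance with made-up numbers (two foods, two nutrients; all values of aij, bi, cj below are hypothetical), solved by brute force over integer purchase quantities:

```python
from itertools import product

# A toy diet instance (all numbers hypothetical): 2 foods, 2 nutrients.
# a[i][j] = amount of nutrient Nj in one unit of food Fi,
# b[i] = cost per unit of Fi, c[j] = minimum daily requirement of Nj.
a = [[2, 1],
     [1, 3]]
b = [3, 2]
c = [8, 9]

def cost(y):
    return sum(b[i] * y[i] for i in range(2))

def meets_requirements(y):
    return all(sum(a[i][j] * y[i] for i in range(2)) >= c[j] for j in range(2))

# Brute-force search over integer purchase quantities (0..10 units each).
best = min((y for y in product(range(11), repeat=2) if meets_requirements(y)),
           key=cost)
print(best, cost(best))   # (3, 2) 13
```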
• There are I production plants, P1 , ..., PI , that supply a certain commodity, and there
are J markets, M1 , . . . , MJ , to which this commodity must be shipped.
• Let cij be the cost of transporting one unit of the commodity from plant Pi to market Mj. Plant Pi can supply at most si units, and market Mj requires at least rj units.
1) Let xij be the quantity of the commodity shipped from plant Pi to market Mj.
The problem is to minimise the objective (1.4) subject to the constraints (1.5), (1.6) and
(1.7), i.e.,
minimise ∑_{i=1}^{I} ∑_{j=1}^{J} cij xij
s.t. ∑_{j=1}^{J} xij ≤ si , i = 1, . . . , I,
∑_{i=1}^{I} xij ≥ rj , j = 1, . . . , J,
xij ≥ 0, i = 1, . . . , I, j = 1, . . . , J.
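A small sanity check (a sketch; the supplies, requirements and unit costs below are made up): for two plants and two markets we can brute-force all integer shipment plans and pick the cheapest feasible one.

```python
from itertools import product

# A toy transportation instance (numbers hypothetical): 2 plants, 2 markets.
s = [3, 4]               # supplies at P1, P2
r = [2, 5]               # requirements at M1, M2
c = [[1, 2],
     [3, 1]]             # c[i][j]: unit shipping cost from Pi to Mj

def total_cost(x):
    return sum(c[i][j] * x[i][j] for i in range(2) for j in range(2))

def feasible(x):
    supply_ok = all(sum(x[i]) <= s[i] for i in range(2))
    demand_ok = all(x[0][j] + x[1][j] >= r[j] for j in range(2))
    return supply_ok and demand_ok

# Brute force over all integer shipment plans with entries 0..5.
plans = (((a, b), (d, e)) for a, b, d, e in product(range(6), repeat=4))
best = min((x for x in plans if feasible(x)), key=total_cost)
print(best, total_cost(best))   # ((2, 1), (0, 4)) 8
```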
• Let us suppose that only one person is allowed on a job at a time (two people cannot
work on the same job at the same time).
The problem is to choose an assignment of persons to jobs to maximise the total value.
The time allocations must satisfy
∑_{j=1}^{10} xij ≤ 1, i = 1, . . . , 100, (1.8)
∑_{i=1}^{100} xij ≤ 1, j = 1, . . . , 10, (1.9)
and
xij ≥ 0, i = 1, . . . , 100, j = 1, . . . , 10. (1.10)
• Inequality (1.8) reflects the fact that a person cannot spend more than 100 percent of their time working.
• Inequality (1.9) means that the total time spent on a job is not allowed to exceed a
day.
• (1.10) says that no one can work a negative amount of time on any job.
Subject to constraints (1.8), (1.9) and (1.10), we wish to maximise the total value,
∑_{i=1}^{100} ∑_{j=1}^{10} aij xij . (1.11)
minimise cT x
subject to Ax = b,
x ≥ 0.
Throughout this course, we call the above minimisation problem the standard form. In fact, any LP problem can be converted to this form.
dT x ≥ α ⇐⇒ dT x − t = α, t ≥ 0.
In matrix form:
Ax ≤ b ⇐⇒ Ax + s = b, s ≥ 0.
So, by multiplying the coefficients of the objective function by −1, a maximisation problem can be converted into a minimisation problem.
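These conversion rules can be carried out mechanically. The sketch below (plain Python lists, no solver; the function name is my own) converts a maximisation problem with ≤ constraints into the standard minimisation form by negating the objective and appending one slack variable per constraint, and checks the result on the machine-shop problem of Example 1.1:

```python
# Convert max{c^T x : Ax <= b, x >= 0} into the standard form
# min{c'^T x' : A'x' = b, x' >= 0} by negating c and adding one slack
# variable per inequality (a sketch using plain lists, no solver).

def to_standard_form(c, A, b):
    m = len(A)
    c_std = [-cj for cj in c] + [0.0] * m          # negated objective, zeros for slacks
    A_std = [row + [1.0 if k == i else 0.0 for k in range(m)]  # append identity block
             for i, row in enumerate(A)]
    return c_std, A_std, list(b)

# Machine-shop example: max 3x1 + 4x2, 2x1 + 5x2 <= 120, 4x1 + 2x2 <= 80.
c_std, A_std, b_std = to_standard_form([3, 4], [[2, 5], [4, 2]], [120, 80])

# The point (10, 20) uses up both capacities, so both slacks are 0.
x_std = [10, 20, 0.0, 0.0]
print([sum(aij * xj for aij, xj in zip(row, x_std)) for row in A_std])  # [120.0, 80.0]
print(sum(cj * xj for cj, xj in zip(c_std, x_std)))                     # -110.0
```

The equality constraints reproduce b, and the standard-form objective is the negative of the original profit.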
Lj ≤ xj ≤ Uj ⇐⇒ xj ≤ Uj and xj ≥ Lj
Example 1.9. Write down the standard form of the following linear programming:
max 4x1 + x2 − x3
s.t. x1 + 3x3 ≤ 6,
3x1 + x2 + 3x3 ≥ 9,
x1 , x2 ≥ 0, x3 is unrestricted in sign.
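One way to sanity-check a proposed standard form (a sketch of my own, not a model answer): split the free variable x3 as x3 = u − v with u, v ≥ 0, add a slack s1 to the first constraint and a surplus s2 to the second, and negate the objective. Any feasible point of the original problem should then map to a nonnegative point satisfying the equality constraints, with the objective value negated:

```python
# Example 1.9: max 4x1 + x2 - x3, s.t. x1 + 3x3 <= 6, 3x1 + x2 + 3x3 >= 9,
# x1, x2 >= 0, x3 free. Standard form (a sketch): x3 = u - v with u, v >= 0,
# slack s1, surplus s2, negated objective:
#   min -4x1 - x2 + u - v
#   s.t.  x1 + 3u - 3v + s1 = 6,
#         3x1 + x2 + 3u - 3v - s2 = 9,  all variables >= 0.

x1, x2, x3 = 1, 6, 1                      # a feasible point of the original LP
assert x1 + 3*x3 <= 6 and 3*x1 + x2 + 3*x3 >= 9

u, v = max(x3, 0), max(-x3, 0)            # split x3 into nonnegative parts
s1 = 6 - (x1 + 3*u - 3*v)                 # slack for the first constraint
s2 = (3*x1 + x2 + 3*u - 3*v) - 9          # surplus for the second constraint

# Equality constraints of the standard form hold, all variables nonnegative:
print(x1 + 3*u - 3*v + s1, 3*x1 + x2 + 3*u - 3*v - s2)   # 6 9
print(min(x1, x2, u, v, s1, s2) >= 0)                     # True
print(-4*x1 - x2 + u - v)    # -9, the negative of the original objective 9
```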
Example 1.10. Write down the standard form of the following linear programming:
The LP problem
min cT x s.t. Ax ≥ b, x ≥ 0 (or, in maximum form, max cT x s.t. Ax ≤ b, x ≥ 0)
is called the canonical form.
Later, we will see that the canonical form is very useful in the duality representation of LPs. Introducing a slack vector, the canonical form can be easily converted to the standard form
min cT x
s.t. Ax − s = b
x ≥ 0, s ≥ 0.
Conversely, a standard form can be easily transformed to a canonical form. (Do this as an exercise.)
Linear programming was developed during the Second World War to plan expenditures and returns in order to reduce costs to the army and increase losses to the enemy. It was kept secret until 1947. After the war, many industries found uses for it in their daily planning.
• Dantzig published the simplex method in 1947, and John von Neumann developed the theory of duality in the same year.
• The linear programming problem was first shown to be solvable in polynomial time
by Leonid Khachiyan in 1979.
• A larger theoretical and practical breakthrough in the field came in 1984 when
Narendra Karmarkar introduced a new interior point method for solving linear
programming problems.
Chapter 2
In this chapter, we will study LPs with 2 variables and how to solve them using the
geometric method. Topics that will be covered:
At the end of this chapter students should be able to use the geometric method to solve a two-variable LP problem, determining whether it is infeasible, unbounded, or has a finite solution.
[DT97, Chapter 2] presents similar topics to this chapter that you may find helpful.
The following result contains the underlying idea of the geometric method for solving
LPs with two variables that we will study in this chapter.
Geometric solution:
• When the feasible region is unbounded, we might be able to move the objective line indefinitely while always intersecting the feasible region; in this case the problem is unbounded.
• For a bounded feasible region, and for some unbounded ones, the objective line must stop at a certain point, since otherwise it would leave the feasible region. In such cases, the problem has a finite optimal solution.
• The feasible set might be empty, and hence the problem is infeasible.
Example 2.3 (Finite solution case). Solve the LP geometrically (or graphically)
minimise x1 + x2
subject to 2x1 + x2 ≤ 6,
−x1 + x2 ≤ 4,
x1 , x2 ≥ 0.
1. Draw the feasible region determined by the constraints.
2. Choose one point in the feasible set and draw the objective hyperplane through it (in the case of 2 variables this is just a line).
3. Move this line in the improving direction (−c for a minimisation problem) for as long as it still intersects the feasible region.
[Figure: the feasible region bounded by the lines 2x1 + x2 = 6 and −x1 + x2 = 4 and the coordinate axes, with the objective line moving in the direction −c.]
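Algebraically, the geometric method corresponds to intersecting pairs of boundary lines and comparing the objective at the feasible intersection points. A sketch for Example 2.3 (illustrative only):

```python
from itertools import combinations

# Example 2.3: minimise x1 + x2 s.t. 2x1 + x2 <= 6, -x1 + x2 <= 4, x1, x2 >= 0.
# Each constraint's boundary is a line a1*x1 + a2*x2 = rhs; candidate optima
# are intersections of pairs of boundary lines that satisfy all constraints.
lines = [((2, 1), 6), ((-1, 1), 4), ((1, 0), 0), ((0, 1), 0)]

def intersect(l1, l2):
    (a1, b1), r1 = l1
    (a2, b2), r2 = l2
    det = a1*b2 - a2*b1
    if det == 0:
        return None                       # parallel boundary lines
    return ((r1*b2 - r2*b1) / det, (a1*r2 - a2*r1) / det)

def feasible(p):
    x1, x2 = p
    return (2*x1 + x2 <= 6 + 1e-9 and -x1 + x2 <= 4 + 1e-9
            and x1 >= -1e-9 and x2 >= -1e-9)

candidates = [p for l1, l2 in combinations(lines, 2)
              if (p := intersect(l1, l2)) is not None and feasible(p)]
opt = min(candidates, key=lambda p: p[0] + p[1])
print(opt, opt[0] + opt[1])   # (0.0, 0.0) 0.0
```

The minimum value 0 is attained at the origin, matching the geometric picture.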
maximise x1 + x2
subject to −x1 + x2 ≤ 4,
x1 , x2 ≥ 0.
Again we follow the three steps above, and it is easy to see from Figure 2 that this problem is unbounded (it has no finite optimal solution).
[Figure 2: the unbounded feasible region determined by −x1 + x2 ≤ 4, x1, x2 ≥ 0; the objective line can be moved in the direction of c indefinitely while still intersecting the region.]
Example (Infeasible case). Consider the LP
minimise x1 + x2
subject to −x1 + x2 ≥ 4,
x1 − 2x2 ≥ 6,
x1 , x2 ≥ 0.
Adding twice the first constraint to the second gives −x1 ≥ 14, i.e. x1 ≤ −14, which contradicts x1 ≥ 0. Hence the feasible set is empty and the problem is infeasible.
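Since the constraints force x1, x2 ≥ 0, the feasible region, if nonempty, would be pointed and would therefore contain a corner point, i.e. an intersection of two boundary lines satisfying all constraints. The sketch below (illustrative only) finds no such corner, which certifies that this LP is infeasible:

```python
from itertools import combinations

# Infeasible example: -x1 + x2 >= 4, x1 - 2x2 >= 6, x1, x2 >= 0.
# If the feasible region were nonempty it would have a corner (the x >= 0
# constraints make it pointed), so we test every pairwise intersection of
# the four boundary lines a1*x1 + a2*x2 = rhs.
lines = [((-1, 1), 4), ((1, -2), 6), ((1, 0), 0), ((0, 1), 0)]

def intersect(l1, l2):
    (a1, b1), r1 = l1
    (a2, b2), r2 = l2
    det = a1*b2 - a2*b1
    if det == 0:
        return None
    return ((r1*b2 - r2*b1) / det, (a1*r2 - a2*r1) / det)

def feasible(p):
    x1, x2 = p
    return (-x1 + x2 >= 4 - 1e-9 and x1 - 2*x2 >= 6 - 1e-9
            and x1 >= -1e-9 and x2 >= -1e-9)

corners = [p for l1, l2 in combinations(lines, 2)
           if (p := intersect(l1, l2)) is not None and feasible(p)]
print(corners)   # []: no feasible corner, so the LP is infeasible
```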
In summary, a two-variable LP falls into exactly one of three cases:
1. The problem has a finite optimal solution (a unique optimal solution or infinitely many optimal solutions).
2. The problem is unbounded.
3. The problem is infeasible.
Chapter 3
In this chapter, we study basic concepts from convex analysis that will be used in subsequent chapters. The following topics will be covered:
• representation of polyhedra.
[BT97, Chapter 2] contains similar material to this chapter that you may find useful.
{x : pT x = α}
where p is a nonzero vector called the normal or the gradient to the hyperplane.
{x : pT x ≥ α} or {x : pT x ≤ α}.
H = {x : pT x = α},
H + = {x : pT x ≤ α},
H − = {x : pT x ≥ α}.
P = {x : Ax ≤ b},
Convex set
Proof. Suppose x̂ and x̄ are elements of H. Then any point of the form λx̂ + (1 − λ)x̄ with λ ∈ [0, 1] satisfies
pT (λx̂ + (1 − λ)x̄) = λ pT x̂ + (1 − λ) pT x̄ = λα + (1 − λ)α = α,
so it also lies in H, and hence H is convex. The proof for the half-spaces is the same, with the = signs changed into ≥ or ≤.
y
x
x y
Convex Nonconvex
Let S = ∩ni=1 Si be the intersection of the Si. We now prove that S is a convex set. Indeed, let x, y ∈ S and α ∈ (0, 1); we show that x(α) = αx + (1 − α)y ∈ S. Since x, y ∈ S, which is the intersection of the Si, the vectors x and y are in Si for all i = 1, . . . , n. By the convexity of Si we must have x(α) ∈ Si for every i, and hence x(α) ∈ S.
[Proof of (ii)] Since each half-space is convex, and a polyhedron is the intersection of
a finite number of half-spaces, any polyhedron is convex.
Proposition 3.4. If a linear programming problem has a finite optimal solution, then it
has either a unique solution or infinitely many solutions.
min cT x
s.t Ax = b,
x ≥ 0.
Suppose that the optimal solution of the LP is not unique. Then it has at least two different optimal solutions, denoted by x̂ and x̄. Since x̂ and x̄ are optimal solutions, they must be feasible for the problem, and the objective values at these two solutions are equal, i.e.
Ax̂ = b, x̂ ≥ 0, Ax̄ = b, x̄ ≥ 0
and
cT x̂ = cT x̄ = z ∗ ,
Therefore x̂ and x̄ belong to the same hyperplane cT x = z∗ and to the same polyhedron {x : Ax = b, x ≥ 0}, both of which are convex sets. This proves that every point of the segment αx̂ + (1 − α)x̄, for α ∈ (0, 1), is an optimal solution.
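The proposition is easy to illustrate numerically (a toy LP of my own): for min x1 + x2 subject to x1 + x2 = 1, x ≥ 0, both unit vectors are optimal with value z∗ = 1, and every convex combination of them is feasible with the same objective value.

```python
# Illustration of Proposition 3.4 with a toy LP in standard form:
# min x1 + x2  s.t.  x1 + x2 = 1, x1, x2 >= 0.
# Both unit vectors are optimal with value z* = 1, so every convex
# combination alpha*x_hat + (1-alpha)*x_bar is also optimal.
x_hat, x_bar = (1.0, 0.0), (0.0, 1.0)

def objective(x):
    return x[0] + x[1]

for k in range(11):
    alpha = k / 10
    x = tuple(alpha*a + (1 - alpha)*b for a, b in zip(x_hat, x_bar))
    assert abs(x[0] + x[1] - 1.0) < 1e-12 and min(x) >= 0   # still feasible
    assert abs(objective(x) - 1.0) < 1e-12                  # same optimal value
print("all convex combinations are optimal")
```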
P = {x : Ax ≤ 0}
Proof. If the set can be represented as P = {x : Ax ≤ 0} for some matrix A, then for any
x ∈ P we have αx ∈ P for any α ≥ 0. Thus, P is a cone.
Conversely, if P is a polyhedron and a cone, then there exists a matrix A and a vector
b such that P can be represented as
P = {x : Ax ≤ b}.
Moreover, by the definition of a cone, for any x ∈ P it must hold that αx ∈ P for every α ≥ 0. In particular, taking α = 0 shows that 0 ∈ P , and thus b ≥ 0. Now let x ∈ P and let α > 0 be arbitrary. Since αx ∈ P , we have A(αx) ≤ b, i.e. Ax ≤ b/α. Letting α → ∞ gives Ax ≤ 0. Conversely, since b ≥ 0, any x with Ax ≤ 0 satisfies Ax ≤ b, i.e. x ∈ P . Hence
P ⊆ {x : Ax ≤ 0} ⊆ P,
so P = {x : Ax ≤ 0}.
Convex function
Proof. This can be verified directly. Let x, y ∈ Rd and λ ∈ [0, 1]. Subtracting the left-hand side from the right-hand side, the defining inequality of convexity is equivalent to
λ(1 − λ)∥x − y∥^2 ≥ 0,
which holds since λ(1 − λ) ≥ 0 for λ ∈ [0, 1].
Figure: a hexagon with vertices A, B, C, D, E, F.
Extreme points
Definition 3.8 (Extreme point). A point x in a convex set X is called an extreme point
of X, if x cannot be represented as a strict convex combination of two distinct points in
X. In other words, if x = λx1 + (1 − λ)x2 with 0 < λ < 1 and x1 , x2 ∈ X, then we must have x = x1 = x2 .
Definition 3.9 (Rays). Let d ̸= 0 be a vector and x0 be a point. The collection of points
of the form {x0 + λd : λ ≥ 0} is called a ray.
x0 is called the vertex of the ray, and d is the direction of the ray.
Definition 3.10 (Direction of a convex set). Given a convex set, a vector d ̸= 0 is called a direction (or recession direction) of the set if, for each x0 in the set, the ray {x0 + λd : λ ≥ 0} also belongs to the set.
Therefore, starting at any point x0 in the set, one can recede along d for any step
length λ ≥ 0 and remain within the set.
Example 3.11.
and
x0 + λd = (x1 + λd1 , x2 + λd2 )T ∈ X, ∀λ ≥ 0.
Therefore,
x1 − x2 + λ(d1 − d2 ) ≥ −2,
x1 + λd1 ≥ 0,
x2 + λd2 ≥ 1,
for all λ ≥ 0. Since the last two inequalities must hold for fixed x1 and x2 and for all
λ ≥ 0, we conclude that d1 , d2 ≥ 0. Similarly, from the first two inequalities above, we
conclude that
d1 − 2d2 ≥ 0, d1 − d2 ≥ 0.
Hence the directions of X are precisely the vectors d = (d1 , d2 )T with
(d1 , d2 )T ̸= 0, d1 ≥ 0, d2 ≥ 0, d1 ≥ 2d2 .
Example 3.14. 1. What are the extreme directions of the set X in Example 3.12?
Theorem 3.15. A nonempty polyhedron P = {x ∈ Rn : Ax ≤ b} ̸= ∅ has finitely many extreme points.
Proof. This theorem can be proved directly using convex analysis. However, that ap-
proach is beyond the scope of this course. Instead, as will be seen in Theorem 5.5, the
set of extreme points of a polyhedron is the same as the set of basic feasible solutions of a linear programming problem defined on the polyhedron. Thus the number of extreme points is the same as the number of basic feasible solutions, which is finite (cf. Section 1).
If the problem
min{cT x : x ∈ P }
has a finite optimal solution, then there is an optimal solution that is an extreme point.
Proof. As with Theorem 3.15, this theorem can be proved directly using convex analysis. However, it can also be seen as a direct consequence of Theorem 5.5 and Theorem 5.6 in Chapter 5.
1). The recession directions of the nonempty polyhedral set X = {x : Ax ≤ b, x ≥ 0} are the vectors d satisfying
d ≥ 0, d ̸= 0, and Ad ≤ 0.
To eliminate duplication, these directions may be normalized, e.g. eT d = 1. Then the set
of recession directions of X can be given by
D = {d : Ad ≤ 0, eT d = 1, d ≥ 0}.
2). Similarly, we may prove that the (normalised) recession directions of the nonempty polyhedral set
X = {x : Ax = b, x ≥ 0}
are given by
D = {d : Ad = 0, eT d = 1, d ≥ 0}.
Representation of a Polyhedron
Convex combination
Example 3.18. Write b = (2/3, 1/2)T as a convex combination of the three vectors
u1 = (1, 0)T , u2 = (0, 1)T , u3 = (1, 1)T .
We seek α, β, γ such that b = αu1 + βu2 + γu3 with
α + β + γ = 1,
α, β, γ ≥ 0.
The first coordinate gives α + γ = 2/3 and the second gives β + γ = 1/2; together with α + β + γ = 1 this yields α = 1/2, β = 1/3, γ = 1/6, so that
b = (1/2) u1 + (1/3) u2 + (1/6) u3 .
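The arithmetic in Example 3.18 is easy to check mechanically. The following Python sketch (not part of the original notes) recomputes the weights with exact fractions:

```python
from fractions import Fraction as F

# Vectors from Example 3.18.
u1, u2, u3 = (F(1), F(0)), (F(0), F(1)), (F(1), F(1))
b = (F(2, 3), F(1, 2))

# The system is: alpha + gamma = 2/3, beta + gamma = 1/2, alpha + beta + gamma = 1.
# Subtracting the first two equations from the third gives the weights directly.
beta = F(1) - b[0]           # 1 - 2/3 = 1/3
alpha = F(1) - b[1]          # 1 - 1/2 = 1/2
gamma = F(1) - alpha - beta  # 1/6

assert alpha + beta + gamma == 1 and min(alpha, beta, gamma) >= 0
combo = tuple(alpha * x + beta * y + gamma * z for x, y, z in zip(u1, u2, u3))
print(alpha, beta, gamma, combo)
```

The assertion confirms the weights are nonnegative and sum to one, i.e. a genuine convex combination.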
Minkowski's theorem states that a polyhedron P can be represented as
P = { x : x = Σ_{j=1}^{k} λj xj + Σ_{i=1}^{l} µi di , Σ_{j=1}^{k} λj = 1,
λj ≥ 0, j = 1, ..., k, µi ≥ 0, i = 1, ..., l },
where {x1 , x2 , ..., xk } is the set of extreme points of P and {d1 , d2 , ..., dl } is the set of
extreme rays of P.
Chapter 4
Duality theory
In this chapter we will study an important topic of LP, namely duality theory. Duality
in linear programming is essentially a unifying theory that develops the relationships
between a given LP problem (the primal problem) and another related LP problem (the
dual problem). We also study the optimality condition, i.e., the so-called Karush-Kuhn-
Tucker (KKT) optimality condition, for LP problems. The duality theory and optimality
conditions form the backbone of linear and nonlinear optimisation problems.
• optimality conditions,
[BT97, Chapter 5] and [DT97, Chapter 5] contain relevant topics to this chapter that you
may find helpful.
Example 4.1 (minimising cost vs. maximising profit). There are two products, A and
B, that are made up from two elements, e and f . Product A contains 5 units of element
e and 2 units of element f and can be sold with price 16 Yuan, while product B contains
3 units of element e and 4 units of element f and can be sold with price 10 Yuan.
Customer's perspective. Suppose a customer needs at least 100 units of element e and 60 units of element f, and wants to purchase products at minimum cost. Let x and y be the units of product A and B that the customer buys. The above problem can be formulated into a minimising LP problem as follows:
minimise 16x + 10y
subject to 5x + 3y ≥ 100,
2x + 4y ≥ 60,
x, y ≥ 0.
In matrix form, this is
Minimise cT x̄,
subject to
Ax̄ ≥ b,
x̄ ≥ 0,
where
x̄ = (x, y)T , A = [5 3; 2 4], b = (100, 60)T , c = (16, 10)T .
Production’s perspective. Now the supplier want to assign price for each unit of element
e and f so that the customer’s need can be met and the profit is maximal. Let u and v
be the price that the supplier should assign to each unit of element e and f . His/her task
can be formulated into the following maximising LP problem
subject to
We can also write this LP in matrix form using the notation above as
maximise bT ȳ,
subject to
AT ȳ ≤ c,
ȳ ≥ 0,
Suppose now in Example above we only know the minimisation problem. How do we
attempt to solve this? Since we are looking for a minimum value of the objective function,
we first need to find a lower bound for it. To do so, we combine the constraints to obtain
a lower bound of the type
u(5x + 3y) + v(2x + 4y) = (5u + 2v)x + (3u + 4v)y ≥ 100u + 60v, (4.1)
valid for any u, v ≥ 0. Since x, y ≥ 0, the left-hand side is at most the objective 16x + 10y whenever
5u + 2v ≤ 16 and 3u + 4v ≤ 10.
Finally, to achieve the best possible lower bound for the objective function, we should maximise the lower bound in (4.1):
maximise 100u + 60v
subject to
5u + 2v ≤ 16,
3u + 4v ≤ 10,
u, v ≥ 0
This is exactly the problem from the production’s perspective in Example 4.1.
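The primal-dual pair above is small enough to solve by brute force. The sketch below (an illustration, not the method used in these notes; `solve_2var_lp` is a hypothetical helper) enumerates the intersection points of the constraint boundaries of each 2-variable LP, keeps the feasible ones, and picks the best objective value. The two optimal values coincide, anticipating the strong duality theorem:

```python
from fractions import Fraction as F
from itertools import combinations

def solve_2var_lp(obj, constraints, maximize=False):
    """Brute-force a 2-variable LP: enumerate intersections of constraint
    boundaries, keep feasible points, take the best objective value.
    Each constraint is (a, b, c, sense) meaning a*x + b*y (>= or <=) c.
    Only valid when an optimum exists at a vertex."""
    lines = [(a, b, c) for (a, b, c, _) in constraints]
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel boundaries, no vertex here
        x = F(c1 * b2 - c2 * b1, det)
        y = F(a1 * c2 - a2 * c1, det)
        feasible = all((a * x + b * y >= c) if s == '>=' else (a * x + b * y <= c)
                       for (a, b, c, s) in constraints)
        if feasible:
            val = obj[0] * x + obj[1] * y
            if best is None or (val > best if maximize else val < best):
                best = val
    return best

# Primal: min 16x + 10y s.t. 5x+3y >= 100, 2x+4y >= 60, x, y >= 0.
primal = solve_2var_lp((16, 10), [(5, 3, 100, '>='), (2, 4, 60, '>='),
                                  (1, 0, 0, '>='), (0, 1, 0, '>=')])
# Dual: max 100u + 60v s.t. 5u+2v <= 16, 3u+4v <= 10, u, v >= 0.
dual = solve_2var_lp((100, 60), [(5, 2, 16, '<='), (3, 4, 10, '<='),
                                 (1, 0, 0, '>='), (0, 1, 0, '>=')],
                     maximize=True)
print(primal, dual)  # both values equal 2260/7
```

Both problems attain the value 2260/7, the primal at (x, y) = (110/7, 50/7) and the dual at (u, v) = (22/7, 1/7).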
Duality deals with pairs of LPs and the relationship between their solutions. One
problem is called primal and the other the dual. It does not matter which problem is
called the primal since the dual of the dual is the primal (see Proposition 6.5 below).
Primal Problem:
(LP ) minimise cT x
s.t. Ax = b, x ≥ 0.
The key idea of duality begins by observing that every feasible x produces an upper bound,
cT x, for the minimum of (LP). Solving (LP) corresponds to minimising upper bounds.
But what if we instead maximise lower bounds?
maximise ξ T y
s.t. ???,
so that for every feasible y, ξ T y ≤ cT x for all feasible x. How is this possible? How can ξ know about x for every feasible x? Well, one of the only things we know about x is that Ax = b, so let's try ξ = b and see what happens. Then
bT y = (Ax)T y = xT (AT y),
and so, since x ≥ 0 for all feasible x, all we need to get our desired bT y ≤ cT x, for all feasible x, is to require that AT y ≤ c.
Dual problem:
(DP ) maximise bT y
s.t. AT y ≤ c,
where y ∈ Rm .
Equivalently, introducing a slack vector s ≥ 0:
(DP ) maximise bT y
s.t. AT y + s = c, s ≥ 0.
s.t. 3x1 + x2 − x3 = 4,
5x1 + 2x2 − x4 = 7,
x1 , x2 , x3 , x4 ≥ 0.
w1 + 2w2 ≤ 8
−w1 ≤ 0
−w2 ≤ 0.
When an LP is not given in standard form, we may first convert it to the standard form, and then write down its dual problem.
s.t. 3x1 + x2 − x3 ≥ 4,
5x1 + 2x2 − x4 ≤ 7,
x1 , x2 , x3 ≥ 0, x4 is free.
(LP ) minimise cT x
s.t. Ax ≥ b, x ≥ 0,
(DP ) maximise bT y
s.t. AT y ≤ c, y ≥ 0.
s.t. 3x1 + x2 − x3 ≥ 4,
5x1 + 2x2 − x4 ≤ 7,
x1 , x2 , x3 , x4 ≥ 0.
(LP ) minimise cT x
s.t. Ax = b, x ≥ 0.
(DP ) maximise bT y
s.t. AT y + s = c, s ≥ 0.
We now consider the dual of (DP). First we transform this problem into standard form: writing y = y ′ − y ′′ with y ′ , y ′′ ≥ 0, and converting the maximisation into a minimisation, (DP) becomes
minimise −bT y ′ + bT y ′′
s.t. AT y ′ − AT y ′′ + s = c, s ≥ 0, y ′ ≥ 0, y ′′ ≥ 0.
The dual of this standard-form problem is
(DDP ) maximise cT z
s.t. (AT , −AT , I)T z ≤ (−b; b; 0),
that is, Az ≤ −b, −Az ≤ b and z ≤ 0. The first two inequalities force Az = −b, so it is equivalent to
(DDP ) maximise cT z
s.t. −Az = b, z ≤ 0.
Substituting w = −z (so w ≥ 0, and maximising cT z amounts to minimising cT w), we obtain
(DDP ) minimise cT w
s.t. Aw = b, w ≥ 0,
which is exactly the primal problem (LP). Hence the dual of the dual is the primal.
Denote by Fp and Fd the feasible regions of primal and dual problems, respectively,
i.e.
Fp = {x : Ax = b, x ≥ 0},
Fd = {(y, s) : AT y + s = c, s ≥ 0}.
The following result shows that the objective value at any primal feasible solution is at
least as large as the objective value at any feasible dual solution.
Theorem 4.7 (Weak duality Theorem). Let x be any feasible point of the primal problem,
i.e. x ∈ Fp and y be any feasible point of the dual problem, i.e., (y, s) ∈ Fd . Then
cT x ≥ bT y.
Proof. For any feasible point x ∈ Fp and (y, s) ∈ Fd , we have Ax = b and x ≥ 0, and
s = c − AT y ≥ 0, and thus
cT x − bT y = cT x − (Ax)T y = xT (c − AT y) = xT s ≥ 0.
The last inequality above follows from the fact that x ≥ 0 and s ≥ 0.
The quantity cT x − bT y is often called the duality gap. From the weak duality theorem
we have the following immediate consequences:
Corollary 4.8. If the primal LP is unbounded (i.e., cT x → −∞) , then the dual LP must
be infeasible.
Corollary 4.9. If the dual LP is unbounded, then the primal LP must be infeasible.
Thus, if either problem has an unbounded objective value, then the other problem pos-
sesses no feasible solution.
Corollary 4.10. If x is feasible to the primal LP, y is feasible to the dual LP, and
cT x = bT y, then x must be optimal to the primal LP and y must be optimal to the dual
LP.
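The weak duality computation in the proof can be replayed numerically. A small Python check, on a standard-form LP chosen for illustration (the same system reappears in a later worked example), verifies that the duality gap cT x − bT y equals xT s and is nonnegative:

```python
# Weak duality check on a small standard-form LP (illustrative data):
# min -2*x1 - x2  s.t.  x1 + 2*x2 - x3 = 15,  -x1 - x2 - x4 = -20,  x >= 0.
A = [[1, 2, -1, 0], [-1, -1, 0, -1]]
b = [15, -20]
c = [-2, -1, 0, 0]

x = [15, 0, 0, 5]   # primal feasible: A x = b and x >= 0
y = [0, 2]          # dual feasible: s = c - A^T y >= 0
assert all(sum(A[i][j] * x[j] for j in range(4)) == b[i] for i in range(2))
s = [c[j] - (A[0][j] * y[0] + A[1][j] * y[1]) for j in range(4)]
assert min(s) >= 0 and min(x) >= 0

cx = sum(cj * xj for cj, xj in zip(c, x))   # primal objective: -30
by = sum(bi * yi for bi, yi in zip(b, y))   # dual objective: -40
gap = cx - by
xs = sum(xj * sj for xj, sj in zip(x, s))
print(cx, by, gap, xs)  # the duality gap c^T x - b^T y equals x^T s = 10 >= 0
```

Here neither point is optimal, and the positive gap of 10 shows exactly the slack x T s from the proof.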
It is possible for both the primal and the dual to be infeasible. Consider the primal problem
minimise −x1 − x2
s.t. x1 − x2 ≥ 1,
−x1 + x2 ≥ 1,
x1 , x2 ≥ 0,
and its dual
maximise w1 + w2
s.t. w1 − w2 ≤ −1,
−w1 + w2 ≤ −1,
w1 , w2 ≥ 0.
Adding the two primal constraints gives 0 ≥ 2, so the primal is infeasible; adding the two dual constraints gives 0 ≤ −2, so the dual is infeasible too.
The last corollary above identifies a sufficient condition for the optimality of a primal-
dual pair of feasible solutions, namely that their objective values coincide. One natural
question to ask is whether this is a necessary condition. The answer is yes, as shown by
the next result.
Theorem 4.12 (Strong Duality Theorem). If both the primal LP and the dual LP have
feasible solutions, then they both have optimal solutions, and for any primal optimal so-
lution x and dual optimal solution y we have that cT x = bT y.
Proof. As will be seen in Chapter 7 one can solve both the primal and the dual problem
simultaneously using the simplex method.
When the primal and the dual are not both feasible, the following cases can occur:
• One problem has an unbounded objective value, in which case the other problem must be infeasible.
• Both problems are infeasible (as in the example above).
In summary,
P optimal ⇔ D optimal
P unbounded ⇒ D infeasible
D unbounded ⇒ P infeasible
Definition 4.13. A network N = (V, E) is a directed graph where V and E are respec-
tively the set of vertices (nodes) and edges (arcs). Given two vertices i, j ∈ V we denote
the edge between them by (i, j).
Definition 4.14. The capacity of an edge is the maximum amount of flow that can pass
through an edge in both directions. Formally it is a map c : E → R+ , called the capacity
map.
In most networks there are one or more nodes which are distinguished. A source is a node such that all edges through it are oriented away from it. Similarly, a sink is a node such that all edges through it are oriented towards it.
• capacity constraint: The flow of an edge cannot exceed its capacity, in other words:
fij ≤ cij , for each (i, j) ∈ E;
• conservation of flows: The sum of the flows entering a node must equal the sum of the flows exiting that node, except for the sources and the sinks: Σ_{i:(i,j)∈E} fij = Σ_{k:(j,k)∈E} fjk , for each j ∈ V \ {S, T }.
Example 4.16 (The max-flow problem). Let N = (V, E) be a directed network with only
one source S ∈ V , only one sink T ∈ V and a set of intermediate nodes. Assume the flow
is never negative, namely if along an edge (i, j) the flow fij is strictly positive, then the
flow fji in the opposite direction is 0. The maximal flow problem is to maximise the flow
from source to sink subject to the network edge capacities and conservation of flow:
maximise Σ_{j:(S,j)∈E} fSj
In the context of flow analysis, there is only an interest in considering how units are
transferred between source and sink. Namely, we can re-write the above linear problem by
considering the whole paths from S to T . Let P be the set of all possible paths from S to
T . We associate with each path p ∈ P a quantity xp specifying how much of the flow from
S to T is being transferred along the path. With this view, the conservation constraint
above is unnecessary and we obtain the following simpler linear programming problem
maximise Σ_{p∈P} xp
subject to Σ_{p∈P : (i,j)∈p} xp ≤ cij , for each (i, j) ∈ E,
xp ≥ 0, for all p ∈ P.
Example 4.17 (The min-cut problem). In graph theory, a cut is a partition of the vertices
of a graph into two disjoint subsets. Any cut determines a cut-set, the set of edges that
have one endpoint in each subset of the partition. These edges are said to cross the cut.
In a connected graph, each cut-set determines a unique cut, and in some cases cuts are
identified with their cut-sets rather than with their vertex partitions.
In a flow network, an s–t cut is a cut that requires the source and the sink to be in
different subsets, and its cut-set only consists of edges going from the source’s side to the
sink’s side. The capacity of an s–t cut is defined as the sum of the capacity of each edge
in the cut-set
In an unweighted undirected graph, the size or weight of a cut is the number of edges
crossing the cut. In a weighted graph, the value or weight is defined by the sum of the
weights of the edges crossing the cut.
Let N = (V, E) be a directed network with only one source S ∈ V , only one sink
T ∈ V and a set of intermediate nodes. Let yij denote the weight to assign to an edge
(i, j). The problem is to separate S and T with minimum total weighted capacity. The
condition that source and sink are separated can be stated as follows: for every path p ∈ P
from the source S to the sink T , the weight of the path is at least 1.
minimise Σ_{(i,j)∈E} cij yij
subject to Σ_{(i,j)∈p} yij ≥ 1, for every p ∈ P,
yij ≥ 0, for each (i, j) ∈ E.
The max-flow min-cut theorem is a special case of the duality theorem. It states that
in a flow network, the maximum amount of flow passing from the source to the sink is
equal to the total weight of the edges in the minimum cut. The max-flow min-cut theorem
has many practical applications in networks, transportation and scheduling.
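The max-flow min-cut theorem can be observed on a toy network (the network and its capacities below are my own choice, not from the notes). The sketch computes the maximum flow with the Edmonds-Karp augmenting-path algorithm and the minimum cut by brute force over all vertex partitions; the two values agree:

```python
from collections import deque
from itertools import combinations

# Toy directed network: capacities on edges, source 'S', sink 'T'.
cap = {('S', 'a'): 3, ('S', 'b'): 2, ('a', 'b'): 1, ('a', 'T'): 2, ('b', 'T'): 3}
nodes = {'S', 'a', 'b', 'T'}

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting path."""
    flow = {e: 0 for e in cap}
    def residual(u, v):
        return cap.get((u, v), 0) - flow.get((u, v), 0) + flow.get((v, u), 0)
    total = 0
    while True:
        parent = {s: None}                       # BFS in the residual graph
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total                         # no augmenting path left
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        bottleneck = min(residual(u, v) for u, v in path)
        for u, v in path:
            back = min(flow.get((v, u), 0), bottleneck)
            if back:                             # cancel reverse flow first
                flow[(v, u)] -= back
            if bottleneck - back:
                flow[(u, v)] = flow.get((u, v), 0) + bottleneck - back
        total += bottleneck

def min_cut(cap, s, t):
    """Brute force: try every vertex partition separating s from t."""
    others = [v for v in nodes if v not in (s, t)]
    best = float('inf')
    for r in range(len(others) + 1):
        for side in combinations(others, r):
            S_side = set(side) | {s}
            value = sum(c for (u, v), c in cap.items()
                        if u in S_side and v not in S_side)
            best = min(best, value)
    return best

print(max_flow(cap, 'S', 'T'), min_cut(cap, 'S', 'T'))  # 5 5
```

Both computations return 5, a tiny instance of the duality between the flow LP and the cut LP above.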
The optimality condition can be obtained by the strong duality theorem from the last
lecture.
The strong duality theorem provides the following condition to identify optimal solutions. Consider the primal problem
minimise cT x
s.t. Ax = b, x ≥ 0.
A point x̄ is an optimal solution of this problem if and only if there exists ȳ such that
AT ȳ ≤ c, (4.2)
Ax̄ = b, x̄ ≥ 0, (4.3)
bT ȳ = cT x̄. (4.4)
(⇒) Suppose x̄ is an optimal solution of LP; then (4.3) holds since x̄ is feasible.
• Hence DP has an optimal solution ȳ with cT x̄ = bT ȳ, by the Strong Duality theorem. Therefore (4.2) and (4.4) hold.
(⇐) Suppose there exists ȳ such that (4.2), (4.3), and (4.4) hold:
• Therefore, by the Strong Duality theorem, there exists x∗ optimal for LP and y ∗
optimal for DP, with cT x∗ = bT y ∗ .
• Therefore bT ȳ ≤ bT y ∗ = cT x∗ ≤ cT x̄. Combined with (4.4), equality holds throughout, so x̄ is an optimal solution of LP (and ȳ of DP).
Remark. Similarly, the optimality condition for the dual problem is the same as above. Therefore, by solving the optimality conditions, we can obtain the optimal solutions for the primal and dual problems at the same time.
The optimality conditions (4.2)–(4.4) can be restated by adding a slack variable vector s̄ to inequality (4.2) and using the computation in the proof of the weak duality theorem:
cT x̄ − bT ȳ = x̄T s̄.
Therefore, at an optimal solution x̄ and (ȳ, s̄), we have x̄T s̄ = 0, which is equivalent to x̄j s̄j = 0 for every j = 1, ..., n by the nonnegativity of x̄ and s̄. Thus, at the optimal solution, at least one of the terms x̄j and s̄j must be zero for each j. This is called the complementary slackness condition.
minimise cT x
s.t. Ax = b, x ≥ 0
Ax̄ = b, x̄ ≥ 0, (Primal feasibility)
AT ȳ + s̄ = c, s̄ ≥ 0, (Dual feasibility)
x̄j s̄j = 0, j = 1, ..., n. (Complementary slackness)
minimise −2x1 − x2
s.t. x1 + 2x2 ≥ 15,
−x1 − x2 ≥ −20,
x1 , x2 ≥ 0.
Verify whether (15, 0) and (20, 0) are optimal solutions to the problem.
• Introduce slack variables x3 and x4 and transform the problem to the standard form
minimise −2x1 − x2
s.t. x1 + 2x2 − x3 = 15,
−x1 − x2 − x4 = −20,
x1 , x2 , x3 , x4 ≥ 0.
x1 + 2x2 − x3 = 15,
−x1 − x2 − x4 = −20,
x1 , x2 , x3 , x4 ≥ 0,
y1 − y2 ≤ −2,
2y1 − y2 ≤ −1,
−y1 ≤ 0, −y2 ≤ 0,
together with the complementary slackness conditions xj s̄j = 0 for j = 1, ..., 4, where s̄ = c − AT y.
• Consider the point (x1 , x2 ) = (15, 0). From the first two equalities above, we see that (x3 , x4 ) = (0, 5) satisfies the first three conditions. Thus at x = (15, 0, 0, 5)T , where x1 > 0 and x4 > 0, the above optimality conditions reduce to
y1 − y2 = −2,
2y1 − y2 ≤ −1,
−y1 ≤ 0, −y2 = 0.
It is easy to verify that this system is inconsistent (there exists no y satisfying these conditions): y2 = 0 forces y1 = −2, contradicting −y1 ≤ 0. Thus, (x1 , x2 ) = (15, 0) is not an optimal solution to the LP.
• Consider the point (x1 , x2 ) = (20, 0). Clearly, this point, together with (x3 , x4 ) = (5, 0), satisfies the first three conditions of optimality. We now check whether there exists a y satisfying the remaining conditions; since x1 > 0 and x3 > 0, these are
y1 − y2 = −2,
2y1 − y2 ≤ −1,
−y1 = 0, −y2 ≤ 0.
Clearly, (y1 , y2 ) = (0, 2) satisfies these conditions, thus the point (x1 , x2 ) = (20, 0) is an optimal solution to the LP problem.
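Both candidate points can be checked mechanically. The sketch below is illustrative only: the brute-force grid search for a dual certificate y works here because the certificate has small half-integer coordinates, but it is not a general algorithm.

```python
from itertools import product
from fractions import Fraction as F

# Standard-form data: min c^T x, A x = b, x >= 0.
A = [[1, 2, -1, 0],
     [-1, -1, 0, -1]]
b = [15, -20]
c = [-2, -1, 0, 0]

def is_optimal(x):
    """Check primal feasibility, then search for y with A^T y <= c that is
    tight on every constraint j with x_j > 0 (complementary slackness).
    The search runs over a coarse grid of halves in [-20, 20]."""
    if any(v < 0 for v in x):
        return False
    if any(sum(A[i][j] * x[j] for j in range(4)) != b[i] for i in range(2)):
        return False
    grid = [F(k, 2) for k in range(-40, 41)]
    for y1, y2 in product(grid, repeat=2):
        ok = True
        for j in range(4):
            aj_y = A[0][j] * y1 + A[1][j] * y2
            if aj_y > c[j]:                  # dual feasibility A^T y <= c
                ok = False; break
            if x[j] > 0 and aj_y != c[j]:    # complementary slackness
                ok = False; break
        if ok:
            return True
    return False

print(is_optimal([15, 0, 0, 5]))  # False: the dual system is inconsistent
print(is_optimal([20, 0, 5, 0]))  # True: y = (0, 2) certifies optimality
```

This mirrors the reasoning above: for (15, 0) no certificate exists at all, while for (20, 0) the search finds y = (0, 2).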
Consider the problem
minimise 5x1 + 12x2 + 4x3
s.t. x1 + 2x2 + x3 = 10,
2x1 − x2 + 3x3 = 8,
x1 , x2 , x3 ≥ 0,
whose dual is
maximise 10y1 + 8y2
s.t. y1 + 2y2 ≤ 5,
2y1 − y2 ≤ 12,
y1 + 3y2 ≤ 4.
You are given the information that x2 and x3 are positive in the optimal solution. Use the complementary slackness condition to solve the dual problem.
Since x2 and x3 are positive in the optimal solution, the corresponding constraints in the dual problem become equalities. Thus y = (y1 , y2 ) is an optimal solution to the dual problem if and only if it satisfies the first constraint and the following system:
2y1 − y2 = 12,
y1 + 3y2 = 4.
Solving this system gives y1 = 40/7, y2 = −4/7, which satisfies the first constraint since
y1 + 2y2 = 32/7 < 5.
Thus it is an optimal solution to the dual problem.
Note that we can also use this information to find the optimal solution to the primal problem. In fact, since the first constraint in the dual problem holds with strict inequality, by the complementary slackness conditions the first variable in the primal problem is zero, that
is x1 = 0. Substituting this into the primal constraints, we obtain x2 = 22/7, x3 = 26/7, which are nonnegative. Also, we can verify that the objective values coincide:
10y1 + 8y2 = 5x1 + 12x2 + 4x3 = 368/7.
Thus x = (0, 22/7, 26/7)T and y = (40/7, −4/7)T are respectively optimal solutions to the primal and the dual problems.
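These computations can be verified with exact arithmetic. Assuming the primal data min 5x1 + 12x2 + 4x3 subject to x1 + 2x2 + x3 = 10, 2x1 − x2 + 3x3 = 8, x ≥ 0 (consistent with all the numbers in the worked solution above), a short check by Cramer's rule:

```python
from fractions import Fraction as F

def solve2(a11, a12, b1, a21, a22, b2):
    """Solve a 2x2 linear system by Cramer's rule (exact fractions)."""
    det = a11 * a22 - a12 * a21
    return (F(b1 * a22 - b2 * a12, det), F(a11 * b2 - a21 * b1, det))

# x2, x3 > 0, so the 2nd and 3rd dual constraints are tight:
#   2*y1 - y2 = 12,   y1 + 3*y2 = 4.
y1, y2 = solve2(2, -1, 12, 1, 3, 4)
assert (y1, y2) == (F(40, 7), F(-4, 7))
assert y1 + 2 * y2 < 5            # first dual constraint holds strictly

# ...so complementary slackness forces x1 = 0 in the primal:
#   2*x2 + x3 = 10,   -x2 + 3*x3 = 8.
x2, x3 = solve2(2, 1, 10, -1, 3, 8)
assert (x2, x3) == (F(22, 7), F(26, 7))

# Objective values coincide, certifying optimality of both solutions.
assert 10 * y1 + 8 * y2 == 12 * x2 + 4 * x3 == F(368, 7)
print(y1, y2, x2, x3)
```

Every assertion passes, reproducing y = (40/7, −4/7), x = (0, 22/7, 26/7) and the common value 368/7.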
If a pair of optimal solutions satisfies (x∗ )T s∗ = 0 and x∗ + s∗ > 0, it is called a strictly complementary solution. The following result shows that an LP always has such a solution.
Theorem 4.22 (Strict Complementarity). If primal and dual problems both have feasible
solutions, then both problems have a pair of strictly complementary solutions x∗ ≥ 0 and
s∗ ≥ 0 with
(x∗ )T s∗ = 0, x∗ + s∗ > 0.
The proof of this theorem can be found in [Gre94] where it is proved that a primal-
dual pair of optimal solutions is unique if and only if it is a strictly complementary pair
of basic solutions, see also Remark 7.4 in Chapter 8 for more discussions.
minimise x1 + x2 + 6x3
s.t. −x1 + x2 + x3 ≥ 2,
x1 − 2x2 + 2x3 ≥ 6,
x1 , x2 , x3 ≥ 0.
Example 4.24. Using duality theory, we prove Farkas's Lemma: exactly one of the following systems has a solution (is feasible).
• (System 1) Ax = b, x ≥ 0.
• (System 2) AT y ≤ 0, bT y > 0.
Proof. If System 1 has a solution, then for any y such that AT y ≤ 0 we have
bT y = (Ax)T y = xT (AT y) ≤ 0.
The inequality follows from the fact that AT y ≤ 0 and x ≥ 0. So System 2 cannot have a solution.
Conversely, suppose System 1 has no solution. Then the problem
minimise 0T x
s.t. Ax = b, x ≥ 0
is infeasible. According to the duality theorem, its dual problem, which is given as
maximise bT y
s.t. AT y ≤ 0,
is feasible (take y = 0) and therefore must be unbounded, for otherwise strong duality would make the primal feasible. Hence there exists y with AT y ≤ 0 and bT y > 0, i.e. System 2 has a solution.
[BT97, Chapter 3] and [DT97, Chapter 3] contain relevant material to this chapter that
you may find useful.
For matrices, we may perform elementary row operations. These operations are most helpful for solving a system of linear equations and for finding the inverse of a matrix. By row operations, the augmented matrix can be brought to upper triangular form, and then we can solve the system by back substitution. The process of reducing A to an upper triangular matrix with ones on the diagonal is called Gaussian reduction of the system. By further reduction we may reduce A to the identity matrix; this process is called a Gauss-Jordan reduction of the system.
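As a sketch of the procedure just described, the following Python function performs Gaussian reduction to an upper triangular matrix with ones on the diagonal, followed by back substitution (exact rational arithmetic; the example system is my own, not from the notes):

```python
from fractions import Fraction as F

def gauss_solve(A, b):
    """Gaussian reduction to upper triangular form with unit diagonal,
    then back substitution. Assumes A is square and nonsingular."""
    n = len(A)
    M = [[F(v) for v in row] + [F(bi)] for row, bi in zip(A, b)]  # augmented
    for k in range(n):
        # Pivot: bring a row with a nonzero entry in column k to position k.
        piv = next(i for i in range(k, n) if M[i][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        M[k] = [v / M[k][k] for v in M[k]]          # scale pivot row to 1
        for i in range(k + 1, n):                   # eliminate below the pivot
            M[i] = [vi - M[i][k] * vk for vi, vk in zip(M[i], M[k])]
    # Back substitution on the upper triangular system.
    x = [F(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    return x

x = gauss_solve([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3])
print([int(v) for v in x])  # [2, 3, -1]
```

Continuing the elimination above the diagonal as well would give the Gauss-Jordan reduction, with the solution appearing directly in the last column.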
min{cT x : Ax = b, x ≥ 0},
Theorem 5.2 (Existence of Optimal Extreme Point). Assume that the feasible region is
nonempty. Then
• If cT dj ≥ 0 for all extreme directions dj , then the LP has a finite optimal solution, attained at an extreme point.
• Otherwise, if there is an extreme direction dj such that cT dj < 0, then the optimal objective value of the LP is unbounded.
Proof. Let x1 , x2 , ..., xk be the extreme points of the constraint set, and let d1 , ..., dl be the extreme directions. By Minkowski's theorem, any feasible point x can be represented as
x = Σ_{j=1}^{k} λj xj + Σ_{j=1}^{l} µj dj ,
where
Σ_{j=1}^{k} λj = 1, λj ≥ 0, j = 1, ..., k,
µj ≥ 0, j = 1, ..., l.
Hence the objective value is
cT x = Σ_{j=1}^{k} λj cT xj + Σ_{j=1}^{l} µj cT dj .
Since the µj ’s can be made arbitrarily large, the minimum is −∞ if cT dj < 0 for some j.
Thus, the necessary condition for the existence of the optimal solution is
cT dj ≥ 0, ∀j = 1, ..., l.
If this condition holds, we can show that the optimal solution exists and is attained at an extreme point. In fact, if cT dj ≥ 0 for all j = 1, ..., l, then in order to minimise the objective function, µj should be taken to be zero for all j. In order to minimise the first term of the objective, we simply find the minimum of cT xj , say cT xp , and let λp = 1 and all other λj 's equal zero.
• In summary, the optimal value of the linear program is finite if and only if cT dj ≥ 0 for all extreme directions.
• Furthermore, if this is the case, then we find the minimum point by picking the
minimum objective value among all extreme points.
• This shows that if an optimal solution exists, we must be able to find an optimal
extreme point.
• Two issues remain: how do we describe and compute extreme points algebraically, and, if the current extreme point is not optimal, how do we move from it to the next, better extreme point?
To address the first issue, we need the concept of Basic Feasible Solution (or basic
feasible point).
Consider the constraint system
Ax = b, x ≥ 0.
Since rank(A) = m, we can select m linearly independent columns of A forming an invertible m × m matrix B and, after reordering the columns, partition A as
A = [B, N ],
i.e.,
BxB + N xN = b.
Note that
xB = B −1 b, xN = 0
is a solution to the above system. This solution is called a basic solution. Thus a basic solution has at least n − m zero variables. In addition, if x is a basic solution then the columns of A corresponding to the non-zero variables of x are linearly independent.
• xB : the basic variables
• xN : the nonbasic variables
Remark 1. The possible number of basic feasible solutions is bounded by the number of
ways of extracting m columns from A to form the basis, so in general,
the number of basic feasible solutions ≤ (n choose m) = n!/(m!(n − m)!).
Example 5.4. Consider the system
x1 + x2 + x3 = 6,
x2 + x4 = 3,
x1 , x2 , x3 , x4 ≥ 0.
The basic feasible solutions correspond to choosing a 2 × 2 invertible submatrix B of A. The following are the possible ways of extracting B out of A (note that [a1 , a3 ] is singular and so cannot serve as a basis):
1. B = [a1 , a2 ] = [1 1; 0 1], xB = (x1 , x2 )T = B −1 b = (3, 3)T , xN = (x3 , x4 )T = (0, 0)T .
2. B = [a1 , a4 ] = [1 0; 0 1], xB = (x1 , x4 )T = B −1 b = (6, 3)T , xN = (x2 , x3 )T = (0, 0)T .
3. B = [a2 , a3 ] = [1 1; 1 0], xB = (x2 , x3 )T = B −1 b = (3, 3)T , xN = (x1 , x4 )T = (0, 0)T .
4. B = [a2 , a4 ] = [1 0; 1 1], xB = (x2 , x4 )T = B −1 b = (6, −3)T , xN = (x1 , x3 )T = (0, 0)T .
5. B = [a3 , a4 ] = [1 0; 0 1], xB = (x3 , x4 )T = B −1 b = (6, 3)T , xN = (x1 , x2 )T = (0, 0)T .
Note that the points corresponding to 1,2,3, and 5 are basic feasible solutions. The
point obtained in 4 is not a basic feasible solution because it violates the nonnegativity
restrictions. In other words, we have four basic feasible solutions, namely
x1 = (3, 3, 0, 0)T , x2 = (6, 0, 0, 3)T , x3 = (0, 3, 3, 0)T , x4 = (0, 0, 6, 3)T .
These points belong to R4 , since after introducing the slack variables we have four variables. These basic feasible solutions, projected onto R2 , that is, onto the (x1 , x2 )-plane, give rise to the four points (3, 3), (6, 0), (0, 3) and (0, 0). These points are precisely the extreme points of the feasible region.
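The enumeration in the example can be automated. The sketch below tries every choice of m = 2 columns of A, discards singular submatrices, and solves B xB = b by Cramer's rule, recovering the four basic feasible solutions, one basic but infeasible solution, and one singular column pair, all within the bound of Remark 1:

```python
from fractions import Fraction as F
from itertools import combinations
from math import comb

# Example 5.4 data: A x = b with A a 2x4 matrix, so m = 2, n = 4.
A = [[1, 1, 1, 0],
     [0, 1, 0, 1]]
b = [6, 3]
m, n = 2, 4

results = {}
for cols in combinations(range(n), m):
    # Candidate basis: the 2x2 submatrix B of A with these columns.
    B = [[A[i][j] for j in cols] for i in range(m)]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    if det == 0:
        results[cols] = None  # singular submatrix: not a basis
        continue
    # Cramer's rule for B xB = b, then embed xB into the full vector x.
    xB = (F(b[0] * B[1][1] - b[1] * B[0][1], det),
          F(B[0][0] * b[1] - B[1][0] * b[0], det))
    x = [F(0)] * n
    for j, v in zip(cols, xB):
        x[j] = v
    results[cols] = x

bfs = [x for x in results.values() if x is not None and min(x) >= 0]
for cols, x in results.items():
    print(cols, x)
print(len(bfs), "basic feasible solutions; bound:", comb(n, m))
```

The output lists the basic solution for each column pair; (x2, x4) gives the infeasible point (0, 6, 0, −3), and four of the six candidates are BFS.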
Theorem 5.5. A point is a basic feasible solution if and only if it is an extreme point.
Ay = ByB + N yN = b = Az = BzB + N zN
Now suppose that x is not a basic feasible solution; we need to show that it is not extreme. Let P = {i : xi > 0} and O = {i : xi = 0}. Since x is feasible, Ax = AP xP + AO xO = b, thus AP xP = b. Since x is not a basic feasible solution, the columns of AP are linearly dependent. As you have seen in Linear Algebra, there then exists a non-zero vector yP ∈ Rk , where k is the cardinality of P , such that AP yP = 0. We take yO = 0 and y = (yP , yO ). Consider the two points x + ϵy and x − ϵy. For ϵ > 0 small enough, these two points are both feasible, since A(x ± ϵy) = Ax ± ϵAy = b and x ± ϵy ≥ 0. Hence
x = (1/2)(x + ϵy) + (1/2)(x − ϵy).
So x is not extreme, as desired.
Thus, the set of basic feasible solutions coincides with the set of extreme points (the
former is an algebraic description and the latter is a geometric description). Since an LP
with a finite optimal value has an optimal solution at an extreme point, an optimal basic
feasible solution can always be found for such problems.
ii) If there is an optimal solution for LP, there is a basic feasible solution that is optimal.
a1 x1 + . . . + ap xp = b, (5.1)
Case 2: {a1 , . . . , ap } are linearly dependent. Then there exist y1 , . . . , yp , not all zero,
such that
y1 a1 + . . . + yp ap = 0. (5.2)
Without loss of generality we can assume that yi > 0 for some i ∈ {1, . . . , p} (otherwise,
we can multiply the above equation by −1). From (5.1) and (5.2) we deduce that, for any ε,
(x1 − εy1 )a1 + . . . + (xp − εyp )ap = b.
Hence A(x − εy) = b, where y = (y1 , . . . , yp , 0, . . . , 0)T . For sufficiently small ε we have
x1 − εy1 ≥ 0, . . . , xp − εyp ≥ 0.
We choose ε by
ε = min{ xi /yi : i = 1, . . . , p, yi > 0 }.
Then we have
(x − εy) ≥ 0 and A(x − εy) = b.
Thus (x − εy) is a feasible solution. In addition, due to the choice of ε, the first p coordinates of (x − εy) are non-negative and at least one of them is zero. Hence we reduce p by at least 1. By repeating this process, we will eventually reach either Case 1 or p = 0. In both cases, there will be a basic feasible solution. This completes the proof of Part i).
Part ii). Suppose that x is an optimal solution. Without loss of generality we assume that x has the minimal number of positive coordinates. We will construct a basic feasible optimal solution from x. If x = 0 then x is basic (and the cost is zero). If x ̸= 0, let I = {i1 , . . . , ip } be the set of indices such that xij > 0. As in Part i), there are two cases.
Case 1: the column vectors {ai1 , . . . , aip } of A are linearly independent. In this case we can proceed exactly as in Part i) and obtain a basic feasible optimal solution. Note that the optimality does not change in this case.
Case 2: the column vectors {ai1 , . . . , aip } of A are linearly dependent. We proceed similarly to Case 2 in Part i). However, we need to show that optimality is preserved during the process. To show that x − εy is optimal, we consider its objective value
value
cT (x − εy) = cT x − εcT y
For sufficiently small |ε|, the vector x − εy is a feasible solution (for positive or negative
values of ε). It follows that we must have cT y = 0, since otherwise, we could find a small
ε of the proper sign such that cT (x − εy) < cT x, which would violate the optimality of x.
Thus x − εy is still optimal.
Let us see how to construct a BFS from the feasible point x = (3, 2, 1, 1, 1)T . The given x has no zero entries, so if we try to replicate the proof of Theorem 5.6, we have m = 3 and p = 5. Therefore we need to solve the system Ay = 0 and pick one solution, for example y = (0, 0, 1, −1, −1)T .
Note that y has only one strictly positive component, y3 , so ε = min{ xi /yi : i = 1, . . . , p, yi > 0 } = x3 /y3 = 1, and x − εy = (3, 2, 0, 2, 2)T . We take this as our new x̃ = (3, 2, 0, 2, 2)T ; this is still a feasible point. It has one coordinate equal to 0, so let us take
Ã = [a1 , a2 , a4 , a5 ] = [−1 3 0 0; 0 2 −1 0; 0 0 −1 1].
Let us solve the system Ãy ′ = 0 and pick one solution, for example y ′ = (3, 1, 2, 2)T , corresponding to ỹ = (3, 1, 0, 2, 2)T . We calculate ε = min{ x̃i /ỹi : i = 1, . . . , 5, ỹi > 0 } = min{3/3, 2/1, 2/2, 2/2} = 1. Again we take x̃ − εỹ = (0, 1, 0, 0, 0)T . This is a BFS. In fact, if we select the matrix B made of the second, fourth and fifth columns of A, we can see that (1, 0, 0)T = B −1 b.
In the next section, we are going to study how to solve LPs by simplex method.
• The simplex method proceeds from one BFS to an adjacent or neighbouring one in such a way as to improve the value of the objective function (moving from one extreme point to another with a better, or at least not worse, objective value).
• It also discovers whether the feasible region is empty, and whether the optimal
solution is unbounded. In practice, the method only enumerates a small portion of
the extreme points of the feasible region.
• The key to the simplex method lies in recognizing the optimality of a given extreme
point solution based on local consideration without having to enumerate all extreme
points or basic feasible solutions.
• The basic idea of the simplex method is to confine the search to extreme points of
the feasible region in a most intelligent way. The key for the simplex method is to
make computers find extreme points.
For simplicity, throughout the remainder of this course we always assume that the
problem is in standard form
min cT x
s.t. Ax = b, x ≥ 0,
where A is an m × n matrix with full rank, i.e., rank(A) = m, and b a vector in Rm , and
c is a vector in Rn .
For this given basis B, we would like to check whether the corresponding basic feasible solution is optimal. If it is not, we would like to find another basic feasible solution at which the objective value is smaller than the current objective value f0 .
Let x be a feasible solution of Ax = b, and let xB and xN be its basic and nonbasic
variables for this given basis. Thus, we have
b = Ax = [B, N ] (xB , xN )T = BxB + N xN .
Multiplying by B −1 we have
xB = B −1 b − B −1 N xN .
f = cT x = cTB xB + cTN xN
= cTB (B −1 b − B −1 N xN ) + cTN xN
= cTB B −1 b − (cTB B −1 N − cTN ) xN
= f0 − (z T − cTN ) xN ,
where f0 = cTB B −1 b, z T = cTB B −1 N ,
B −1 N = [B −1 am+1 , . . . , B −1 an ] = [y m+1 , . . . , y n ],
and
zj = cTB B −1 aj = cTB y j , j ∈ R.
For this problem, xB can be viewed as a vector of slack variables, so the above LP can be further rewritten as
Minimize f = f0 − Σ_{j∈R} (zj − cj ) xj
subject to Σ_{j∈R} y j xj ≤ b̄, (5.4)
xj ≥ 0, j ∈ R.
Definition: The value −(zj − cj ) = cj − zj is referred to as the reduced cost coefficient.
Proposition 5.8. From (5.4), we see that if (zj − cj ) ≤ 0 for all j ∈ R, then the current
basic feasible solution is optimal.
Proof. Since zj − cj ≤ 0 for all j ∈ R, from f = f0 − Σ_{j∈R} (zj − cj ) xj we see that f ≥ f0 for any feasible solution. For the current basic feasible solution, we have f = f0 , since xj = 0 for all j ∈ R. So the current feasible solution is optimal.
• If there exists j ∈ R such that zj − cj > 0 and y j ≤ 0, then LP has an unbounded solution (or simply, we say that the LP is unbounded), since we can send xj to +∞ in (5.4).
• If there exists j ∈ R such that zj − cj > 0 and y j has a positive component, then the current objective value can be further decreased (if the current BFS is non-degenerate).
Example 5.9. Consider the problem
min x1 + 3x2
s.t. Ax = b, x ≥ 0,
where the constraint matrix A and the vector b are the same as in Example 5.4:
A = [a1 , a2 , a3 , a4 ] = [1 1 1 0; 0 1 0 1], b = (6, 3)T .
Consider the basic feasible solution xT = (3, 3, 0, 0). For this solution B = [a1 , a2 ], N =
[a3 , a4 ], f0 = 12, cTB = (1, 3), cTN = (0, 0).
Then
[y 3 , y 4 ] := B −1 N = [1 −1; 0 1] [1 0; 0 1] = [1 −1; 0 1],
and
z T := cTB B −1 N = (1, 2),
so that z3 − c3 = 1 and z4 − c4 = 2.
Since z3 − c3 = 1 > 0 and the corresponding vector y 3 = (1, 0)T has a positive component, we know that the value f0 = 12 is not optimal. Similarly, for j = 4, zj − cj > 0. Let us look at the corresponding y j , namely
y 4 = B −1 a4 = [1 −1; 0 1] (0, 1)T = (−1, 1)T ;
again we have a positive component, which tells us that there is scope for improvement.
We shall continue this example at the end of the next subsection, after explaining how
to construct a new BFS that improves the objective value.
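As a sketch of the computations in the example above, the following Python code rebuilds B−1, the current BFS, f0 and the reduced costs zj − cj (the helper functions `inv2` and `matvec` are mine, not from the notes):

```python
from fractions import Fraction as F

# Data of the example: basis B = [a1, a2], nonbasic N = [a3, a4].
B = [[1, 1], [0, 1]]
N = [[1, 0], [0, 1]]
b = [6, 3]
cB = [1, 3]   # costs of the basic variables x1, x2
cN = [0, 0]   # costs of the nonbasic variables x3, x4

def inv2(M):
    # Exact inverse of a 2x2 matrix.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[F(M[1][1], det), F(-M[0][1], det)],
            [F(-M[1][0], det), F(M[0][0], det)]]

def matvec(M, v):
    return [sum(row[k] * v[k] for k in range(len(v))) for row in M]

Binv = inv2(B)
xB = matvec(Binv, b)                          # current BFS: (3, 3)
f0 = sum(ci * xi for ci, xi in zip(cB, xB))   # current objective: 12
reduced = []
for j in range(2):
    aj = [N[0][j], N[1][j]]
    yj = matvec(Binv, aj)                     # y^j = B^{-1} a_j
    zj = sum(ci * yi for ci, yi in zip(cB, yj))
    reduced.append(zj - cN[j])                # z_j - c_j
    print(f"nonbasic column {j + 3}: y = {yj}, z - c = {reduced[-1]}")
print("f0 =", f0)
```

The output reproduces z3 − c3 = 1 and z4 − c4 = 2 at the BFS (3, 3, 0, 0) with f0 = 12.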
The simplex procedure moves from one BFS to an adjacent one by introducing a non-zero component into the zero part of the BFS and removing a non-zero component from it.
• Leaving: If
b̄r /yrk ≡ min_{1≤i≤m} { b̄i /yik : yik > 0 },
then xBr (the rth component of xB ) may be set to 0, namely it "leaves" the basis.
3. Otherwise, use the above rules to choose the entering variable and the leaving variable, form the new basis, and obtain an improved basic feasible solution.
Let us explain why this works. We start from a given BFS, (b̄, 0, . . . , 0). Choose one of the positive numbers zk − ck , perhaps the most positive of all the zj − cj , j ∈ R. Set
xj = 0, for j ∈ R \ {k},
so that
f = f0 − (zk − ck ) xk , (5.5)
and
xB = (xB1 , xB2 , . . . , xBr , . . . , xBm )T = b̄ − y k xk = (b̄1 − y1k xk , . . . , b̄m − ymk xk )T , (5.6)
where ylk , l = 1, . . . , m are the components of the vector y k .
Assume the current basic feasible solution is not degenerate, i.e. b̄ > 0. Now let us analyse what happens when we increase the value of xk .
(ii) For those yik ≤ 0, when xk increases, (5.6) implies that the component xBi will increase or stay constant, remaining nonnegative.
(iii) For those yik > 0, increasing xk makes the corresponding component xBi decrease. So we can increase xk , but not too much. From (5.6), the first basic variable dropping to zero corresponds to the minimum of b̄i /yik over positive yik . More precisely, we can increase xk until it reaches the following value:
\[
\frac{\bar b_r}{y_{rk}} = \min_{1 \le i \le m} \left\{ \frac{\bar b_i}{y_{ik}} : y_{ik} > 0 \right\}. \qquad (5.7)
\]
Let us choose xk = b̄r /yrk . Under non-degeneracy, xk > 0. From (5.5) and the fact that
zk − ck > 0, it follows that the objective is strictly improved: as xk increases from level
0 to b̄r /yrk , a new improved feasible solution is obtained. Substituting xk given by (5.7)
into (5.6) yields the following:
\[
\begin{aligned}
x_{B_i} &= \bar b_i - \frac{y_{ik}}{y_{rk}} \bar b_r , \quad i = 1, 2, \ldots, m, \\
x_k &= \frac{\bar b_r}{y_{rk}} , \qquad\qquad (5.8)\\
&\text{all other } x_j \text{'s are zero.}
\end{aligned}
\]
From (5.8), we have xBr = 0, and hence at most m variables are positive. We want to
prove that this is also a BFS.
Proposition 5.10. (5.8) is the basic feasible solution corresponding to the basis obtained from B by replacing the column aBr with ak .
Proof. We must first check that the new columns form a set of m linearly
independent vectors. This is a consequence of the following lemma (see Linear Algebra):
Lemma 5.11. Let a1 , a2 , ..., an form a basis of Rn . Then any vector a can be represented
as
\[
a = \sum_{j=1}^{n} \lambda_j a_j .
\]
Let’s replace aj by a, then the vectors a1 , ..., aj−1 , a, aj+1 , ..., an are linearly independent if
and only if, in the above representation, λj ̸= 0.
To apply this lemma to our situation, we notice that by the definition of y k , i.e.,
y k = B −1 ak , we have
\[
a_k = B y^k = \sum_{i=1}^{m} y_{ik}\, a_{B_i} ,
\]
with yrk ̸= 0. So if we replace the rth column of B by ak , the resulting vectors are still
linearly independent.
It remains to check that the new point satisfies the constraints, that is,
\[
x_{B_1} a_{B_1} + \cdots + x_k a_k + \cdots + x_{B_m} a_{B_m} = b,
\]
where xB1 , xB2 , ..., xk , ..., xBm is given by (5.8). In fact, using b̄r = yrk xk , the left-hand side equals
\[
\begin{aligned}
&\left(\bar b_1 - \tfrac{y_{1k}}{y_{rk}} \bar b_r\right) a_{B_1} + \cdots + \tfrac{\bar b_r}{y_{rk}} a_k + \cdots + \left(\bar b_m - \tfrac{y_{mk}}{y_{rk}} \bar b_r\right) a_{B_m} \\
&\quad = (\bar b_1 - y_{1k} x_k)\, a_{B_1} + \cdots + x_k a_k + \cdots + (\bar b_m - y_{mk} x_k)\, a_{B_m} \\
&\quad = \sum_{j=1}^{m} \bar b_j a_{B_j} + x_k \left( a_k - \sum_{j=1}^{m} y_{jk}\, a_{B_j} \right) \\
&\quad = B \bar b + x_k (a_k - B y^k ) \\
&\quad = b + 0 = b.
\end{aligned}
\]
[BT97, Chapter 3] and [DT97, Chapter 3] have similar material in this chapter that you
may find useful.
Initial Step
Main Step
xB = B −1 b = b̄,
xN = 0,
where R is the current set of indices associated with the nonbasic variables. Choose k
such that
\[
z_k - c_k = \max_{j \in R} \{ z_j - c_j \}.
\]
Step 4. This step deals with removing a variable from the basis and adding a nonbasic one. Let xk enter the basis.
4.2. Update the basis B where ak replaces aBr , update the index set R, and go to
step 1.
• or else we generate a new basic feasible solution if zk − ck > 0 and yk ̸≤ 0, and the
objective function strictly decreases.
Therefore, in the absence of degeneracy (i.e. xB > 0), the objective strictly decreases at
each iteration, and hence the basic feasible solutions generated by the simplex method are
distinct. Since there is only a finite number of basic feasible solutions, the method would
stop in a finite number of steps with either an optimal solution or an unbounded optimal
value.
minimise f
BxB + N xN = b (6.2)
xB , xN ≥ 0.
We now reformulate this LP once again. Observe that (6.2) can be written as
xB + B −1 N xN = B −1 b.
So the LP is reformulated as
minimise f
xB + B −1 N xN = B −1 b,
xB , xN ≥ 0.
xB xN RHS
f 0 cTB B −1 N − cTN cTB B −1 b
xB I B −1 N B −1 b
The tableau contains all information of the linear programming that we need to proceed
with the simplex method:
• The top element in the RHS column gives us the objective value,
• The bottom element in the RHS column gives the basic variable values B −1 b,
• The top element in the xN column gives us the values zj − cj for the nonbasic variables, or in other words the negatives of the reduced costs,
We use the following language: the row labelled by f is called the zeroth row, and the row
labelled by xBi is called the i-th row.
To minimise f we need to go from one BFS to the next, therefore we need a scheme to
do the following:
In the next subsection, we show that all of the tasks can be simultaneously accomplished
by simple pivoting operations which are actually equivalent to a series of matrix elemen-
tary row operations on the matrix
\[
\begin{pmatrix} 1 & c_B^T & c_N^T & 0 \\ 0 & B & N & b \end{pmatrix}.
\]
6.2.1 Pivoting:
If xk enters the basis and xBr leaves the basis, then pivoting on yrk consists of the
following operations:
• Divide the rth row by yrk .
• For i = 1, ..., m and i ̸= r, update the ith row by adding to it −yik times the new
rth row.
• Update row zero by adding to it −(zk − ck ) times the new rth row.
The tableau after pivoting on yrk can be described entry by entry (xk replaces xBr as the label of the rth row):
• In the zeroth row, the entry in column j becomes (zj − cj ) − (yrj /yrk )(zk − ck ), the entry in the old column of xBr becomes −(zk − ck )/yrk , and the RHS becomes cTB b̄ − (zk − ck ) b̄r /yrk .
• The new rth row, labelled xk , has entry yrj /yrk in column j, entry 1/yrk in the old column of xBr , and RHS b̄r /yrk .
• Every other row i ̸= r has entry yij − (yrj /yrk ) yik in column j, entry −yik /yrk in the old column of xBr , and RHS b̄i − (yik /yrk ) b̄r .
• xk entered and xBr left the basis. This is illustrated on the left-hand-side of the
tableau by replacing xBr with xk .
• The right-hand-side gives the current values of the basic variables. The nonbasic
variables are kept zero.
• Suppose that the original columns for the new basic and nonbasic variables are B̂
and N̂ , respectively. The pivoting operation, which is a sequence of elementary row
operations, results in a new tableau that gives the new B̂ −1 N̂ under the nonbasic
variables, an updated set of zj − cj ’s for the new nonbasic variables, and the values
of the new basic variables and objective.
Initialization step:
Find an initial basic feasible solution with basis B. Form the following initial tableau.
xB xN RHS
f 0 cTB B −1 N − cTN cTB B −1 b
xB I B −1 N B −1 b
Main Step:
Step 1. Choose k such that zk − ck = maxj∈R {zj − cj }. If zk − ck ≤ 0, stop; the current basic feasible solution is optimal.
Step 2. Examine y k .
• If y k ≤ 0, stop; the optimal value is unbounded.
• Otherwise, go to Step 3.
Step 3. Determine the index r by the minimum ratio test to find the variable xBr to leave.
Step 4.
• Update the basic and nonbasic variables where xk enters the basis and xBr leaves
the basis, and repeat the main step.
Minimise x1 + x2 − 4x3
s.t. x1 + x2 + 2x3 ≤ 9,
x1 + x2 − x3 ≤ 2,
−x1 + x2 + x3 ≤ 4,
x1 , x2 , x3 ≥ 0.
Adding slack variables x4 , x5 , x6 , the problem becomes
Minimise x1 + x2 − 4x3
subject to x1 + x2 + 2x3 + x4 = 9,
x1 + x2 − x 3 + x5 = 2,
−x1 + x2 + x3 + x6 = 4,
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0.
Since b = (9, 2, 4)T > 0, the initial basis can be chosen as B = [a4 , a5 , a6 ] = I, so
B −1 b = b̄ > 0. Thus, the initial tableau is given by
Initial iteration
x1 x2 x3 x4 x5 x6 RHS
f −1 −1 4 0 0 0 0
x4 1 1 2 1 0 0 9
x5 1 1 −1 0 1 0 2
x6 −1 1 1∗ 0 0 1 4
• From this table, we see that x3 is a nonbasic variable, and z3 − c3 = 4 > 0. Thus,
the current basic feasible solution (x4 , x5 , x6 ) = (9, 2, 4) is not optimal. x3 will enter
the basis.
• Now we are going to determine which of x4 , x5 , x6 will leave the basis. By checking
y 3 = (2, −1, 1)T , we note that it has two positive components. By the minimum
ratio test, min{9/2, 4/1} = 4, so x6 leaves the basis, and the marked element in the
above table is the pivot element.
• We do pivoting operations to get a new basic feasible solution. Thus the algorithm
proceeds to the next iteration.
Iteration 1.
x1 x2 x3 x4 x5 x6 RHS
f 3 −5 0 0 0 −4 −16
x4 3∗ −1 0 1 0 −2 1
x5 0 2 0 0 1 1 6
x3 −1 1 1 0 0 1 4
• Now z1 − c1 = 3 > 0, so x1 enters the basis. We determine which of x4 , x5 , x3 will leave. Checking y 1 = (3, 0, −1)T , the minimum ratio test indicates that x4 should leave the basis.
Iteration 2.
x1 x2 x3 x4 x5 x6 RHS
f 0 −4 0 −1 0 −2 −17
x1 1 −1/3 0 1/3 0 −2/3 1/3
x5 0 2 0 0 1 1 6
x3 0 2/3 1 1/3 0 1/3 13/3
Now, zj − cj ≤ 0 for all nonbasic variables, and thus an optimal solution is found and it
is given by
x∗1 = 1/3, x∗2 = 0, x∗3 = 13/3, with optimal value −17.
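Combining the entering rule, the minimum ratio test and pivoting gives a complete, if naive, tableau simplex. The sketch below is ours (the names and data layout are assumptions), uses exact Fraction arithmetic to avoid rounding, and replays the example just solved:

```python
from fractions import Fraction as F

def simplex_min(tableau, basis):
    """Tableau simplex for minimisation. Row 0 holds the z_j - c_j values,
    with the current objective value as its last entry; rows 1..m hold [y | b_bar]."""
    m = len(tableau) - 1
    while True:
        zc = tableau[0][:-1]
        k = max(range(len(zc)), key=lambda j: zc[j])   # most positive z_j - c_j
        if zc[k] <= 0:
            return tableau, basis                       # optimal
        ratios = [(tableau[i][-1] / tableau[i][k], i)
                  for i in range(1, m + 1) if tableau[i][k] > 0]
        if not ratios:
            return None, basis                          # unbounded objective
        r = min(ratios)[1]                              # minimum ratio test
        piv = tableau[r][k]
        tableau[r] = [v / piv for v in tableau[r]]
        for i in range(m + 1):
            if i != r:
                f = tableau[i][k]
                tableau[i] = [v - f * w for v, w in zip(tableau[i], tableau[r])]
        basis[r - 1] = k                                # x_k enters at row r

# Replaying the example above: columns x1..x6, initial basis {x4, x5, x6}.
rows = [[-1, -1, 4, 0, 0, 0, 0],
        [1, 1, 2, 1, 0, 0, 9],
        [1, 1, -1, 0, 1, 0, 2],
        [-1, 1, 1, 0, 0, 1, 4]]
tab, basis = simplex_min([[F(v) for v in row] for row in rows], [3, 4, 5])
# tab[0][-1] == -17 with basis [0, 4, 2], i.e. x1, x5, x3 basic.
```

As the next example shows, this plain largest-coefficient rule can cycle on degenerate problems, so a production implementation would need an anti-cycling rule such as Bland's rule below.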
Example 6.3 (Beale 1955). Solve the following LP using the simplex method:
\[
\begin{aligned}
\text{Minimise}\quad & -\tfrac{3}{4} x_4 + 20 x_5 - \tfrac{1}{2} x_6 + 6 x_7 \\
\text{subject to}\quad & x_1 + \tfrac{1}{4} x_4 - 8 x_5 - x_6 + 9 x_7 = 0, \\
& x_2 + \tfrac{1}{2} x_4 - 12 x_5 - \tfrac{1}{2} x_6 + 3 x_7 = 0, \\
& x_3 + x_6 = 1, \\
& x_1 , \ldots , x_7 \ge 0.
\end{aligned}
\]
Initial tableau
x1 x2 x3 x4 x5 x6 x7 RHS
f 0 0 0 3/4 -20 1/2 -6 0
x1 1 0 0 1/4∗ -8 -1 9 0
x2 0 1 0 1/2 -12 -1/2 3 0
x3 0 0 1 0 0 1 0 1
Iteration 1. The maximum value of the minus reduced cost is 3/4, so k = 4. The
minimum ratio test is satisfied for r = 1, 2, for example choose r = 1. So we pivot with
respect to y14
x1 x2 x3 x4 x5 x6 x7 RHS
f -3 0 0 0 4 7/2 -33 0
x4 4 0 0 1 -32 -4 36 0
x2 -2 1 0 0 4∗ 3/2 -15 0
x3 0 0 1 0 0 1 0 1
Iteration 2. The maximum value of the minus reduced cost is 4, so k = 5. The minimum
ratio test is satisfied for r = 2, so we pivot with respect to y25
x1 x2 x3 x4 x5 x6 x7 RHS
f -1 -1 0 0 0 2 -18 0
x4 -12 8 0 1 0 8∗ -84 0
x5 -1/2 1/4 0 0 1 3/8 -15/4 0
x3 0 0 1 0 0 1 0 1
Iteration 3. Here we could pivot both at y46 and y56 , we choose y46 :
x1 x2 x3 x4 x5 x6 x7 RHS
f 2 -3 0 -1/4 0 0 3 0
x6 -3/2 1 0 1/8 0 1 -21/2 0
x5 1/16 -1/8 0 -3/64 1 0 3/16∗ 0
x3 3/2 -1 1 -1/8 0 0 21/2 1
Iteration 4.
x1 x2 x3 x4 x5 x6 x7 RHS
f 1 -1 0 1/2 -16 0 0 0
x6 2∗ -6 0 -5/2 56 1 0 0
x7 1/3 -2/3 0 -1/4 16/3 0 1 0
x3 -2 6 1 5/2 -56 0 0 1
Iteration 5.
x1 x2 x3 x4 x5 x6 x7 RHS
f 0 2 0 7/4 -44 -1/2 0 0
x1 1 -3 0 -5/4 28 1/2 0 0
x7 0 1/3∗ 0 1/6 -4 -1/6 1 0
x3 0 0 1 0 0 1 0 1
Iteration 6.
x1 x2 x3 x4 x5 x6 x7 RHS
f 0 0 0 3/4 -20 1/2 -6 0
x1 1 0 0 1/4 -8 -1 9 0
x2 0 1 0 1/2 -12 -1/2 3 0
x3 0 0 1 0 0 1 0 1
This tableau is exactly the same as the initial tableau. Therefore, the simplex method will cycle
endlessly. We observe that the basic feasible solution in each iteration is degenerate (but
not totally degenerate) and the objective function value never changes.
It is this degeneracy that causes cycling. We see that throughout the process we were
faced several times with a non-unique choice of pivot. There are several ways to avoid
cycling. The most commonly used approach is to fix a rule for making choices when faced with
more than one possible pivoting candidate. The most common rule is Bland’s rule, which is
formally stated as follows:
1. Choose the lowest-numbered (i.e., leftmost) nonbasic column with a negative reduced cost; in the tableau this means “choose the leftmost column with a positive entry zj − cj in the zeroth row”.
2. If more than one row satisfies the minimum ratio test, choose the row whose basic variable has the lowest-numbered column.
This means that in the previous example we should pivot w.r.t. y24 in the first
iteration.
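Bland's two choices can be sketched in code as follows (hypothetical helpers, assuming the tableau convention used in these notes, where a positive zj − cj entry marks a candidate column):

```python
def bland_entering(zc, basis):
    """Lowest-indexed nonbasic column j with z_j - c_j > 0, or None if optimal."""
    for j in range(len(zc)):
        if j not in basis and zc[j] > 0:
            return j
    return None

def bland_leaving(b_bar, y_k, basis):
    """Among the rows attaining the minimum ratio, the one whose basic
    variable has the smallest index; None means the problem is unbounded."""
    ratios = [(b_bar[i] / y_k[i], basis[i], i)
              for i in range(len(y_k)) if y_k[i] > 0]
    if not ratios:
        return None
    best = min(rat for rat, _, _ in ratios)
    return min((bv, i) for rat, bv, i in ratios if rat == best)[1]
```

The point of the second function is that ties in the ratio test are broken deterministically by variable index, which is what rules out cycling.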
Example 6.4. If Bland’s rule does not work, we can modify the rule and start again. For
example we can change the rule by saying that instead of the lowest-numbered nonbasic
column with a negative reduced cost we should choose the highest, or that instead of the
highest numbered column basic variable, we should choose the lowest.
\[
\begin{aligned}
\text{Maximise}\quad & 2^{n-1} x_1 + 2^{n-2} x_2 + \cdots + x_n \\
\text{subject to}\quad & x_1 \le 5, \\
& 4 x_1 + x_2 \le 25, \\
& \quad\vdots \\
& 2^n x_1 + 2^{n-1} x_2 + \cdots + 4 x_{n-1} + x_n \le 5^n , \\
& x_1 , \ldots , x_n \ge 0.
\end{aligned}
\]
The LP has n variables, n constraints and 2^n extreme points. The simplex method,
starting from the origin, will go through each of the extreme points before obtaining the
optimal solution at (0, 0, . . . , 5^n ). To illustrate this, we consider the Klee-Minty cube for
the case n = 3, that is
Maximise 4x1 + 2x2 + x3
subject to x1 ≤ 5,
4x1 + x2 ≤ 25,
8x1 + 4x2 + x3 ≤ 125,
x1 , x2 , x3 ≥ 0.
It is easy to see that 4x1 + 2x2 + x3 ≤ 8x1 + 4x2 + x3 ≤ 125, and both inequalities become
equalities when x1 = x2 = 0, x3 = 125. Thus the optimal value is 125 and is achieved
at (x1 , x2 , x3 ) = (0, 0, 125). Adding slack variables s1 , s2 , s3 , we obtain the following
problem
Maximise 4x1 + 2x2 + x3
subject to x1 + s1 = 5,
4x1 + x2 + s2 = 25,
8x1 + 4x2 + x3 + s3 = 125,
x1 , x2 , x3 , s1 , s2 , s3 ≥ 0.
x1 x2 x3 s1 s2 s3 RHS x1 x2 x3 s1 s2 s3 RHS
−f 4 2 1 0 0 0 0 −f 0 2 1 -4 0 0 -20
s1 1 0 0 1 0 0 5 x1 1 0 0 1 0 0 5
s2 4 1 0 0 1 0 25 s2 0 1 0 -4 1 0 5
s3 8 4 1 0 0 1 125 s3 0 4 1 -8 0 1 85
x1 x2 x3 s1 s2 s3 RHS x1 x2 x3 s1 s2 s3 RHS
−f 0 0 1 4 -2 0 -30 −f -4 0 1 0 -2 0 -50
x1 1 0 0 1 0 0 5 s1 1 0 0 1 0 0 5
x2 0 1 0 -4 1 0 5 x2 4 1 0 0 1 0 25
s3 0 0 1 8 -4 1 65 s3 -8 0 1 0 -4 1 25
x1 x2 x3 s1 s2 s3 RHS x1 x2 x3 s1 s2 s3 RHS
−f 4 0 0 0 2 -1 -75 −f 0 0 0 -4 2 -1 -95
s1 1 0 0 1 0 0 5 x1 1 0 0 1 0 0 5
x2 4 1 0 0 1 0 25 x2 0 1 0 -4 1 0 5
x3 -8 0 1 0 -4 1 25 x3 0 0 1 8 -4 1 65
x1 x2 x3 s1 s2 s3 RHS x1 x2 x3 s1 s2 s3 RHS
−f 0 -2 0 4 0 -1 -105 −f -4 -2 0 0 0 -1 -125
x1 1 0 0 1 0 0 5 s1 1 0 0 1 0 0 5
s2 0 1 0 -4 1 0 5 s2 4 1 0 0 1 0 25
x3 0 4 1 -8 0 1 85 x3 8 4 1 0 0 1 125
Thus the simplex method applied to the Klee-Minty cube for n = 3, starting at the origin,
goes through all of the 8 vertices before reaching the optimal value.
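The Klee-Minty constraint data are easy to generate for any n; a small sketch whose coefficient pattern follows the constraints displayed above (the helper name is ours):

```python
def klee_minty(n):
    """Rows and right-hand sides of the Klee-Minty cube: row i reads
    2^i x_1 + 2^(i-1) x_2 + ... + 4 x_{i-1} + x_i <= 5^i."""
    rows, rhs = [], []
    for i in range(1, n + 1):
        row = [2 ** (i - j + 1) for j in range(1, i)] + [1] + [0] * (n - i)
        rows.append(row)
        rhs.append(5 ** i)
    return rows, rhs

# klee_minty(3) -> ([[1, 0, 0], [4, 1, 0], [8, 4, 1]], [5, 25, 125])
```

Feeding these data for growing n into a largest-coefficient simplex exhibits the exponential 2^n growth in the number of visited vertices.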
Remark 2. There are other algorithms for solving linear-programming problems. In par-
ticular, interior-point methods refer to a class of methods for solving linear programming
problems that move through the interior of the feasible region such as Khachiyan’s ellip-
soidal algorithm and Karmarkar’s algorithm. This is in contrast to the simplex algorithm,
which finds an optimal solution by traversing the edges between vertices of the feasible re-
gion. Interior-point methods often have worst-case polynomial complexity; however, in
most practical applications, the simplex method is often very efficient.
The linear programming problem: Find a strongly-polynomial time algorithm which for
given matrix A ∈ Rm×n and b ∈ Rm decides whether there exists x ∈ Rn with Ax ≥ b.
Minimise −3x1 + x2
s.t. x1 + 2x2 ≤ 4,
−x1 + x2 ≤ 1,
x1 , x2 , x3 , x4 ≥ 0.
Minimise −x1 − 3x2
s.t. x1 − 2x2 ≤ 4,
−x1 + x2 ≤ 3,
x1 , x2 ≥ 0.
85
x1 x2 x3 x4 RHS
f 1 3 0 0 0
x3 1 −2 1 0 4
x4 −1 1∗ 0 1 3
x1 x2 x3 x4 RHS
f 4 0 0 −3 −9
x3 −1 0 1 2 10
x2 −1 1 0 1 3
Now z1 − c1 = 4 > 0 but y 1 = (−1, −1)T ≤ 0, so the objective is unbounded below.
Maximise −3x1 + x2
2x1 + x2 ≥ −2,
2x1 + x2 ≤ 8,
Degeneracy cases
When the right-hand-side vector b̄ is not positive (at least one of its components is zero),
we can still use the simplex method to solve the problem. The following is such an exam-
ple.
Example 6.9 (Investment problem). Someone has $ 20,000 to finance various invest-
ments. There are five categories of investments, each with an associated return and risk:
Their goal is to allocate the money to the categories so as to maximise the return, subject
to the following constraints:
(a) An average risk is no more than 6 (all averages taken over the invested money, not
over the savings).
(b) The amount in mortgages and personal loans combined should be no higher than the
amount in commercial.
Since there is no return from government securities, we do not consider such an invest-
ment. Denote
• x3 and x4 are the amounts invested in personal loans and savings, respectively.
x1 x2 x3 x4 x5 x6 RHS
f 5 7 9 0 0 0 −60, 000
x5 1 −1 1∗ 0 1 0 0
x6 −4 0 2 0 0 1 0
x4 1 1 1 1 0 0 20,000
Replacing x5 by x3 , we have
x1 x2 x3 x4 x5 x6 RHS
f −4 16 0 0 −9 0 −60, 000
x3 1 −1 1 0 1 0 0
x6 −6 2∗ 0 0 −2 1 0
x4 0 2 0 1 −1 0 20,000
Replacing x6 by x2 , we have
x1 x2 x3 x4 x5 x6 RHS
f 44 0 0 0 7 −8 −60, 000
x3 −2 0 1 0 0 1/2 0
x2 −3 1 0 0 −1 1/2 0
x4 6∗ 0 0 1 1 −1 20,000
Replacing x4 by x1 , we have
x1 x2 x3 x4 x5 x6 RHS
f 0 0 0 −22/3 −1/3 −2/3 −620, 000/3
x3 0 0 1 1/3 1/3 1/6 20,000/3
x2 0 1 0 1/2 −1/2 0 10,000
x1 1 0 0 1/6 1/6 −1/6 10,000/3
The optimal value of (P1 ) is −620, 000/3, and thus the maximum return for the original problem (P2 ) is
\[
f^* = \frac{620{,}000}{3}.
\]
Suppose that we get the final tableau (optimal tableau) of the simplex method. What
information can be obtained from this final tableau, and how can this information be
used?
x1 x2 x3 x4 x5 x6 RHS
f 0 0 0 −22/3 −1/3 −2/3 −620, 000/3
x3 0 0 1 1/3 1/3 1/6 20,000/3
x2 0 1 0 1/2 −1/2 0 10,000
x1 1 0 0 1/6 1/6 −1/6 10,000/3
f ∗ = −620, 000/3.
At each iteration, the current B −1 can be obtained immediately from the current tableau,
and the inverse of the optimal basis can be found in optimal tableau.
Assume that the original tableau has an identity matrix. The process of reducing the
basis matrix B of the original tableau to an identity matrix in the current tableau, is
equivalent to premultiplying rows 1 through m of the original tableau by B −1 to produce
the current tableau. This also converts the identity matrix of the original tableau to B −1 .
Therefore, B −1 can be extracted from the current tableau as the submatrix in rows 1
through m under the original identity columns.
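This read-off can be written in one line. A sketch, assuming the tableau is a list of rows (row 0 the objective row) and `identity_cols` lists, in order, the columns that formed the identity in the original tableau:

```python
def extract_b_inverse(tableau, identity_cols):
    """B^{-1} is the submatrix of rows 1..m under the original identity columns."""
    return [[tableau[i][j] for j in identity_cols] for i in range(1, len(tableau))]
```

For instance, when the original identity came from slack columns x4 and x5 of a two-constraint problem, `identity_cols = [3, 4]` (0-based) recovers the 2 × 2 matrix B −1 directly from the optimal tableau.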
subject to x1 − x2 + x3 ≤ 0,
−4x1 + 2x3 ≤ 0,
x1 + x2 + x3 + x4 = 20, 000,
x1 , x2 , x3 , x4 ≥ 0.
subject to x1 − x2 + x3 + x5 = 0,
−4x1 + 2x3 + x6 = 0,
x1 + x2 + x3 + x4 = 20, 000,
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0.
x1 x2 x3 x4 x5 x6 RHS
f 8 10 12 3 0 0 0
x5 1 −1 1 0 1 0 0
x6 −4 0 2 0 0 1 0
x4 1 1 1 1 0 0 20,000
x1 x2 x3 x4 x5 x6 RHS
f 0 0 0 −22/3 −1/3 −2/3 −620, 000/3
x3 0 0 1 1/3 1/3 1/6 20,000/3
x2 0 1 0 1/2 −1/2 0 10,000
x1 1 0 0 1/6 1/6 −1/6 10,000/3
What are B and B −1 ? We have
\[
B = [a_3 , a_2 , a_1 ] = \begin{pmatrix} 1 & -1 & 1 \\ 2 & 0 & -4 \\ 1 & 1 & 1 \end{pmatrix}, \qquad B^{-1} = \begin{pmatrix} 1/3 & 1/6 & 1/3 \\ -1/2 & 0 & 1/2 \\ 1/6 & -1/6 & 1/6 \end{pmatrix}.
\]
Example 6.12. The following simplex tableau shows the optimal solution of a linear
programming. It is known that x4 and x5 are the slack variables in the first and second
constraints of the original problem. The constraints of the original problem are of the ‘≤’
type.
x1 x2 x3 x4 x5 RHS
f 0 −2 0 −3 −2 −35
x3 0 1/4 1 1/2 0 5/2
x1 1 −1/2 0 −1/6 1/3 5/2
The initial identity basis corresponds to x4 and x5 . So in the optimal tableau, the
inverse of the optimal basis B is given by
\[
B^{-1} = \begin{pmatrix} 1/2 & 0 \\ -1/6 & 1/3 \end{pmatrix}.
\]
In many situations, from the optimal tableau, we may recover the original LP problem.
Example 6.13. (Same example as Example 6.12) The following simplex tableau shows
the optimal solution of a linear programming problem. It is known that x4 and x5 are the slack
variables in the first and second constraints of the original problem. The constraints are
of the ‘≤’ type
x1 x2 x3 x4 x5 RHS
f 0 −2 0 −3 −2 −35
x3 0 1/4 1 1/2 0 5/2
x1 1 −1/2 0 −1/6 1/3 5/2
4. Determine the cost coefficient c of the objective - this will be done by using the dual
simplex method.
Chapter 7
In this chapter, we study another extension/variant of the simplex method, the dual
simplex method. This method is useful and necessary in, for instance, the following
situations.
• However, in these same instances, it is often possible to find a starting basis which
need not be primal feasible, but which is dual feasible (i.e., all zj − cj ≤ 0
for a minimisation problem).
• The dual simplex method is a variant of the simplex method that produces a
series of simplex tableaux maintaining dual feasibility and complementary slackness
while striving toward primal feasibility.
• The dual simplex method solves the dual problem directly on the (primal) simplex
tableau. At each iteration we move from a basic feasible solution of the dual problem
to an improved basic feasible solution until optimality of the dual (and also the
primal) is reached, or else until we conclude that the dual is unbounded and that
the primal is infeasible.
Minimise cT x
subject to Ax ≥ b
x ≥ 0.
Maximise bT w
subject to AT w ≤ c
w ≥ 0.
Assume we run the simplex method for a few iterations and reach an optimal tableau:
where we assume that for the indices i such that xi is one of the base variables, say xBl ,
we take the column entries yji to be all 0 except yBl i that is equal to 1. Similarly for those
columns, the values of zi − ci are set to be 0. Because the solution is optimal, the slack
variables can be chosen to be 0 (they don’t contribute to the final cost), so for simplicity
assume that the optimal BFS contains the slack variables within the non basic part.
From this optimal tableau, we may also get the optimal solution to the dual problem:
Theorem 7.1. At optimality of the primal minimisation problem in the canonical form,
w̄T := cTB B −1
is an optimal solution to the dual problem and has the following components
wi = cn+i − zn+i , i = 1, . . . , m.
Proof. The tableau is optimal if and only if zj − cj ≤ 0 for all j ∈ R,
where R is the set of indices corresponding to the non-basic variables. Since, by definition
of z and w̄
zj − cj = cTB B −1 aj − cj = w̄T aj − cj = [AT w̄]j − cj ,
we have that AT w̄ ≤ c if and only if the tableau is optimal. Moreover, because for the
slack variables the columns ai+n are given by the canonical vectors, namely ai+n = −ei ,
we have
zn+i − cn+i = w̄T an+i − cn+i = −w̄T ei − 0 = −wi ,
So indeed w̄ is feasible for the dual problem if and only if the tableau is optimal.
Moreover, w̄ is optimal for the dual problem because its objective value bT w̄ = cTB B −1 b equals the optimal primal objective value, so optimality follows from weak duality.
Remark 4. Note that in the previous theorem 7.1, we have assumed that the primal is
in canonical form, namely, the constraints are in the form Ax ≥ b. If the constraints are
the other way around, namely Ax ≤ b, then the components of wT := cTB B −1 are given by
wi = zn+i − cn+i .
Therefore, when the optimal solution is attained, from the final tableau of the simplex
method, we may get both the primal and dual optimal solutions.
Example 7.2 (Same example as Example 6.13). By using Remark 4, we extract the
optimal solution of the dual problem by looking at the 0-row components corresponding to
x4 and x5 :
w∗ = (−3, −2)T .
Now, since, thanks to the same theorem, (−3, −2) = cTB B −1 , we calculate
\[
c_B^T = (-3, -2) \begin{pmatrix} 2 & 0 \\ 1 & 3 \end{pmatrix} = (-8, -6).
\]
To compute cN , we use the fact that in the non basic part of the 0-th row, we have the
values of cTB B −1 N − cTN . In this example, the only component of cN that we need to find
is the second, as the cost of the slack variables is 0. So we have
\[
-2 = c_B^T B^{-1} a_2 - c_2 = (-3, -2) \begin{pmatrix} 1/2 \\ -5/4 \end{pmatrix} - c_2 ,
\]
which gives c2 = 3.
The data in the simplex tableaux might be lost or corrupted, in which case we need
to recover these data.
Example 7.3. The following is the optimal tableau of simplex method for a given min-
imisation problem. The objective is to minimise −2x1 + 3x2 , and the slack variables are
x3 and x4 corresponding to the first and second inequalities, respectively. The constraints
are of the ‘≤’ type.
x1 x2 x3 x4 RHS
f b −1 f g -6
x3 c 0 1 1/5 4
x1 d e 0 2 a
Introducing slack variables in both primal and dual we can state the dual problems
as:
Primal Dual
Minimise cT x Maximise bT w
subject to Ax + t = b subject to AT w + s = c
x ≥ 0 w ≥ 0.
Thanks to the complementary slackness condition, we have that if (x̄T , t̄T ) and (w̄T , s̄T )
are optimal, then
x̄T s̄ = w̄T t̄ = 0.
2. The variables of the primal problem correspond to the dual slack variables.
It has been proved [Gre94] that a primal-dual pair of optimal solutions is unique if
and only if it is a strictly complementary pair of basic solutions (see Theorem 4.22 for
the definition of a strictly complementary solution). However, in general, it may happen
that both the primal and the dual linear programming problems have multiple optimal
solutions. For example, consider the following LP problem
Minimise x1 + x2 + 2x3
subject to x1 + x3 = 1,
x2 + x3 = 1,
x3 ≤ 0,
x1 , x2 ≥ 0.
This problem has multiple solutions, for instance (1, 1, 0) and (2, 2, −1). In fact, any feasible
point is optimal. The dual problem is
Maximise y1 + y2
subject to y1 ≤ 1,
y2 ≥ 1,
y1 + y2 ≤ 2.
The dual also has multiple solutions, for example (1, 1) and (0, 2).
min{cx : Ax = b, x ≥ 0},
and suppose that we have a dual feasible point, i.e., the initial tableau of the simplex method
is given as follows:
x1 x2 ··· xn RHS
f z1 − c1 z2 − c2 ··· zn − cn cTB b
xB 1 y11 y12 ··· y1n b̄1
xB 2 y21 y22 ··· y2n b̄2
.. .. .. .. .. ..
. . . . . .
xBm ym1 ym2 ··· ymn b̄m
where
zj − cj ≤ 0, j = 1, . . . , n.
Case 1. If b̄ ≥ 0, then the current point is already optimal to the linear program.
Case 2. If b̄r < 0 for some r, and the rth row vector is nonnegative, i.e., yrj ≥ 0, for
all j = 1, ..., n, then the dual is unbounded (and hence the primal problem is infeasible).
To see why this is true, we need to go back to section 5.2.1. There we saw that the primal
problem in standard form can be reformulated as in (5.4) :
\[
\begin{aligned}
\text{Minimise}\quad & f = f_0 - \sum_{j \in R} (z_j - c_j )\, x_j \\
\text{subject to}\quad & \sum_{j \in R} y^j x_j \le \bar b , \qquad (7.1)\\
& x_j \ge 0, \quad j \in R,
\end{aligned}
\]
where R is the set of indices belonging to the non basis. The constraints can be written
in matrix form as
\[
\sum_{j \in R} y^j x_j = B^{-1} N x_N \le \bar b ,
\]
Its dual is
\[
\begin{aligned}
\text{Maximise}\quad & -\bar b^T w_N \\
\text{subject to}\quad & N^T B^{-T} w_N \ge z - c , \qquad (7.2)\\
& w_N \ge 0,
\end{aligned}
\]
where we denote by wN the dual variable to xN . In Case 2 the rth row of B −1 N is nonnegative and z − c ≤ 0, so wN = t er is feasible for every t ≥ 0, while the objective −t b̄r tends to +∞ as t → ∞ because b̄r < 0. Hence the dual is unbounded.
Case 3. If b̄r < 0 for some r, and there exists at least one element yrj < 0 in the rth
row, then let xBr leave the basis, and use the following minimum ratio test to determine
the variable to enter:
\[
\frac{z_k - c_k}{y_{rk}} = \min \left\{ \frac{z_j - c_j}{y_{rj}} : y_{rj} < 0 \right\}.
\]
Thus, after a pivoting, the dual simplex method moves from one dual basic feasible point
to another one with improved function value. Under non-degeneracy assumption, the al-
gorithm terminates after finite iterations and finds the primal and dual optimal solutions.
Initialization Step: Find a basis B of the primal such that zj − cj ≤ 0 for all j.
Main Step:
Step 1. If b̄ ≥ 0, stop; the current solution is optimal. Otherwise, select a pivot row r with b̄r < 0.
Step 2. If yrj ≥ 0 for all j, stop, the dual is unbounded (and hence the primal is
infeasible). Otherwise, select the pivot column k by the following minimum ratio test:
\[
\frac{z_k - c_k}{y_{rk}} = \min \left\{ \frac{z_j - c_j}{y_{rj}} : y_{rj} < 0 \right\}.
\]
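The minimum ratio test of Step 2 in code (a hedged sketch with a hypothetical helper name; `zc` holds the zj − cj values and `row_r` the rth row of the tableau):

```python
def dual_pivot_column(zc, row_r):
    """Pivot column for the dual simplex: among y_rj < 0, the j minimising
    (z_j - c_j)/y_rj; None signals that the primal is infeasible."""
    candidates = [(zc[j] / row_r[j], j) for j in range(len(row_r)) if row_r[j] < 0]
    if not candidates:
        return None
    return min(candidates)[1]
```

Because every zj − cj is nonpositive and every candidate yrj is negative, each ratio is nonnegative, and choosing the smallest one preserves dual feasibility after the pivot.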
Minimise 2x1 + 3x2 + 4x3
s.t. x1 + 2x2 + x3 ≥ 3,
2x1 − x2 + 3x3 ≥ 4,
x1 , x2 , x3 ≥ 0.
Multiplying both sides of the inequalities by −1 and adding slack variables x4 and x5 , we
may form the first table as follows:
x1 x2 x3 x4 x5 RHS
f −2 −3 −4 0 0 0
x4 −1 −2 −1 1 0 −3
x5 −2∗ 1 −3 0 1 −4
Since b̄2 = −4 < 0, x5 leaves the basis. Which variable should enter? By the minimum
ratio test,
\[
\min \left\{ \frac{z_1 - c_1}{y_{21}} , \frac{z_3 - c_3}{y_{23}} \right\} = \min \left\{ \frac{-2}{-2} , \frac{-4}{-3} \right\} = \min \{ 1, 4/3 \} = 1,
\]
which indicates that x1 enters the basis. So after pivoting operations, we get the next
tableau.
x1 x2 x3 x4 x5 RHS
f 0 −4 −1 0 −1 4
x4 0 −(5/2)∗ 1/2 1 −1/2 −1
x1 1 −1/2 3/2 0 −1/2 2
Since b̄1 = −1 < 0, x4 must leave the basis. By the minimum ratio test, x2 enters the
basis. After pivoting, we get the optimal tableau
x1 x2 x3 x4 x5 RHS
f 0 0 −9/5 −8/5 −1/5 28/5
x2 0 1 −1/5 −2/5 1/5 2/5
x1 1 0 7/5 −1/5 −2/5 11/5
Sensitivity Analysis
(non-examinable)
• In most practical applications, some of the problem data are uncertain or cannot be
known exactly. In this situation, the problem data are estimated. It is important to
be able to find the new optimal solution of the problem as other estimates of some
of the data become available, without the expensive task of resolving the problem
from scratch.
• Furthermore, in many situations the constraints are not very rigid. For example,
a constraint may reflect the availability of some resources. This availability can
be increased by extra purchase, and the like. It is desirable to examine the effect
of relaxing some of the constraints on the value of the optimal objective without
having to resolve the problem.
[BT97, Chapter 5] and [DT97, Chapter 7] have similar material in this chapter that you
may find useful.
min{cx : Ax = b, x ≥ 0}.
We shall describe how to make use of the optimality conditions in order to find a new
optimal solution, if some of the problem data change, without resolving the problem from
scratch. In particular, the following variations in the problem will be considered.
Changes of the cost vector c will change the cost row of the final tableau, that is, the
dual feasibility may be affected (lost). Suppose ck is changed to c′k . This includes two
cases:
Case I
• For j ̸= k, the reduced cost coefficient zj − cj is not changed, and it satisfies that
zj − cj ≤ 0.
• For j = k, we have
zk − c′k = (zk − ck ) + (ck − c′k ).
Thus, if ck is increased, i.e., if c′k ≥ ck , then we still have zk − c′k ≤ 0, and the current
solution remains optimal. More generally, if c′k ≥ zk , the solution does not change.
Otherwise, zk − c′k > 0, in which case xk must be introduced into the basis and the
simplex method is continued as usual.
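The optimality check for a change in a nonbasic cost is a one-liner; a sketch with hypothetical names:

```python
def nonbasic_cost_change_ok(zk_minus_ck, ck, ck_new):
    """Current basis stays optimal iff z_k - c'_k = (z_k - c_k) + (c_k - c'_k) <= 0."""
    return zk_minus_ck + (ck - ck_new) <= 0
```

For instance, with zk − ck = −3 and ck = 1, a new cost of −3 gives zk − c′k = 1 > 0 (re-optimisation is needed), while a new cost of −1 gives −1 ≤ 0 (the basis is still optimal).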
Minimise −2x1 + x2 − x3
s.t. x1 + x2 + x3 ≤ 6,
−x1 + 2x2 ≤ 4,
x1 , x2 , x3 ≥ 0.
x1 x2 x3 x4 x5 RHS
f 0 −3 −1 −2 0 −12
x1 1 1 1 1 0 6
x5 0 3 1 1 1 10
Suppose now that the cost c2 = 1 is changed to c′2 = −3, so that z2 − c′2 = (z2 − c2 ) + (c2 − c′2 ) = −3 + 4 = 1 > 0,
and all other zj − cj are unaffected. Hence x2 enters the basis to replace x5 in the basis.
So we get the following table
x1 x2 x3 x4 x5 RHS
f 0 1 −1 −2 0 −12
x1 1 1 1 1 0 6
x5 0 3∗ 1 1 1 10
Continue the simplex iteration, and find the new optimal solution as follows.
x1 x2 x3 x4 x5 RHS
f 0 0 −4/3 −7/3 −1/3 −46/3
x1 1 0 2/3 2/3 −1/3 8/3
x2 0 1 1/3 1/3 1/3 10/3
So the solution of the changed LP is (x∗1 , x∗2 ) = (8/3, 10/3), and the optimal value is
−46/3.
Case II
In this case xk is basic, for example xk = xB1 . The change of c to c′ changes the value
of z to z ′T = c′T_B B −1 N and hence
\[
z'_j - c_j = c'^T_B B^{-1} a_j - c_j = c_B^T B^{-1} a_j - c_j + (0, \ldots, 0, c'_{B_1} - c_{B_1}, 0, \ldots, 0)\, y^j = z_j - c_j + (c'_{B_1} - c_{B_1})\, y_{B_1 j} .
\]
This means that we can just update the 0-th row by replacing zj − cj with zj − cj + (c′B1 −
cB1 )yB1 j for j non basic, while all entries corresponding to the basic variables remain equal
to 0. The new objective function is also changed to
\[
c'^T_B B^{-1} b = c_B^T B^{-1} b + (c'_{B_1} - c_{B_1})\, \bar b_{B_1} .
\]
Example 8.2. In the LP of Exercise 8.1, suppose we change the cost c1 to c′1 = −4,
so that c′1 − c1 = −2. Then the optimal tableau is updated to:
x1 x2 x3 x4 x5 RHS
f 0 −5 −3 −4 0 −24
x1 1 1 1 1 0 6
x5 0 3 1 1 1 10
We see that every entry in the 0-th row is still nonpositive, so the current basis remains optimal; the optimal value has changed to −24.
Minimise −2x1 + x2 − x3
s.t. x1 + x2 + x3 ≤ 6,
−x1 + 2x2 ≤ 4,
x1 , x2 , x3 ≥ 0.
x1 x2 x3 x4 x5 RHS
f 0 −3 −1 −2 0 −12
x1 1 1 1 1 0 6
x5 0 3 1 1 1 10
i) Suppose that the right-hand-side vector b = (6, 4)T is replaced by b′ = (5, 4)T . Check
if this change will affect the current optimal solution of the problem.
Solutions: the initial identity basis corresponds to x4 and x5 . So in the optimal tableau,
the inverse of the optimal basis B is given by
\[
B^{-1} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.
\]
Suppose that the RHS vector b = (6, 4)T is replaced by b′ = (5, 4)T . We compute
\[
B^{-1} b' = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 5 \\ 4 \end{pmatrix} = \begin{pmatrix} 5 \\ 9 \end{pmatrix} > 0.
\]
Thus the current basis is still optimal. The optimal solution becomes
\[
x_B = \begin{pmatrix} x_1 \\ x_5 \end{pmatrix} = B^{-1} b' = \begin{pmatrix} 5 \\ 9 \end{pmatrix}.
\]
Now suppose that the RHS vector is changed to b′ = (−3, 4)T . We compute
\[
B^{-1} b' = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} -3 \\ 4 \end{pmatrix} = \begin{pmatrix} -3 \\ 1 \end{pmatrix}.
\]
Since B −1 b′ has a negative entry, the current basis becomes infeasible. The dual simplex
method implies that the dual problem is unbounded. Hence the changed problem is infea-
sible. For this specific case, we can deduce that the changed problem is infeasible directly
since the first constraint is replaced by
x1 + x2 + x3 ≤ −3, which cannot hold for nonnegative variables.
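The whole right-hand-side check can be wrapped in a small helper (the naming is ours):

```python
def rhs_change_check(b_inv, b_new):
    """New basic values B^{-1} b' and whether the current basis stays optimal
    (all components nonnegative)."""
    x_b = [sum(row[j] * b_new[j] for j in range(len(b_new))) for row in b_inv]
    return x_b, all(v >= 0 for v in x_b)

# With B^{-1} = [[1, 0], [1, 1]]: b' = (5, 4) gives ([5, 9], True),
# while b' = (-3, 4) gives ([-3, 1], False).
```

When the check fails, the dual simplex method of Chapter 7 can be started from the (dual feasible) tableau with the updated right-hand side.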
Suppose that a new activity xn+1 with cost cn+1 and consumption column an+1 is con-
sidered for possible production. Without resolving the problem, we can easily determine
whether producing xn+1 is worthwhile. First calculate zn+1 − cn+1 .
• If zn+1 − cn+1 > 0, then xn+1 is introduced into the basis and the simplex method
continues to find the new optimal solution.
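Pricing out a candidate activity is a single computation; a sketch (the names are ours):

```python
def price_out(c_b, b_inv, a_new, c_new):
    """z_{n+1} - c_{n+1} = c_B^T B^{-1} a_{n+1} - c_{n+1}; a positive value
    means the new activity is worth introducing."""
    w = [sum(c_b[i] * b_inv[i][j] for i in range(len(c_b)))
         for j in range(len(b_inv[0]))]
    return sum(w[j] * a_new[j] for j in range(len(a_new))) - c_new

# price_out([-2, 0], [[1, 0], [1, 1]], [-1, 2], 1) == 1  (> 0, so introduce x6)
```

The intermediate vector `w` is exactly the simplex multiplier cTB B −1 , so several candidate columns can be priced out with the same `w`.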
Minimise −2x1 + x2 − x3
s.t. x1 + x2 + x3 ≤ 6,
−x1 + 2x2 ≤ 4,
x1 , x2 , x3 ≥ 0.
x1 x2 x3 x4 x5 RHS
f 0 −3 −1 −2 0 −12
x1 1 1 1 1 0 6
x5 0 3 1 1 1 10
Let’s find new optimal solution if a new activity x6 ≥ 0 with c6 = 1 and a6 = (−1, 2)T is
introduced.
From the optimal tableau, we find that the inverse of the optimal basis B is
\[
B^{-1} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.
\]
We compute
\[
z_6 - c_6 = c_B^T B^{-1} a_6 - c_6 = (-2, 0) \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} -1 \\ 2 \end{pmatrix} - 1 = 1 > 0,
\]
\[
y^6 = B^{-1} a_6 = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} -1 \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}.
\]
We incorporate this information into the last optimal simplex tableau:
x1 x2 x3 x4 x5 x6 RHS
f 0 −3 −1 −2 0 1 −12
x1 1 1 1 1 0 -1 6
x5 0 3 1 1 1 1∗ 10
We continue using the simplex method: x6 enters the basis and x5 leaves. By pivoting
around the pivoting entry, which is marked with a star on the above tableau, we obtain
the following tableau
x1 x2 x3 x4 x5 x6 RHS
f 0 −6 −2 −3 -1 0 −22
x1 1 4 2 2 1 0 16
x6 0 3 1 1 1 1 10
This tableau is optimal. Thus the optimal solution to the changed problem is (x∗1 , x∗2 , x∗3 , x∗6 ) =
(16, 0, 0, 10), and the optimal value is −22.
• If the optimal solution to the original problem satisfies the added constraint, it is
then obvious that the point is also an optimal solution to the new problem.
• If the optimal solution to the original problem does not satisfy the new constraint,
i.e., if the constraint cut away the optimal point, we can use the dual simplex method
to find the new optimal solution.
Minimise −2x1 + x2 − x3
s.t. x1 + x2 + x3 ≤ 6,
−x1 + 2x2 ≤ 4,
x1 , x2 , x3 ≥ 0.
x1 x2 x3 x4 x5 RHS
f 0 −3 −1 −2 0 −12
x1 1 1 1 1 0 6
x5 0 3 1 1 1 10
Now suppose that the new constraint −x1 + 2x3 ≥ 2 is added to the problem.
Clearly the optimal point (x1 , x2 , x3 ) = (6, 0, 0) does not satisfy this constraint. The
constraint −x1 + 2x3 ≥ 2 is rewritten as
x1 − 2x3 + x6 = −2,
where x6 is a nonnegative slack variable. This row is added to the optimal tableau of the
above problem to obtain the following tableau.
x1 x2 x3 x4 x5 x6 RHS
f 0 −3 −1 −2 0 0 −12
x1 1 1 1 1 0 0 6
x5 0 3 1 1 1 0 10
x6 1 0 −2 0 0 1 −2
Multiply row 1 by −1 and add to row 3 in order to restore column x1 to a unit vector.
x1 x2 x3 x4 x5 x6 RHS
f 0 −3 −1 −2 0 0 −12
x1 1 1 1 1 0 0 6
x5 0 3 1 1 1 0 10
x6 0 −1 −3∗ −1 0 1 −8
The dual simplex method can then be applied to the above tableau. By the minimum ratio test of the dual simplex method, x3 enters the basis to replace x6. Thus, we obtain the following tableau.
x1 x2 x3 x4 x5 x6 RHS
f 0 −8/3 0 −5/3 0 −1/3 −28/3
x1 1 2/3 0 2/3 0 1/3 10/3
x5 0 8/3 0 2/3 1 1/3 22/3
x3 0 1/3 1 1/3 0 −1/3 8/3
In summary, when a constraint is added, first restore the simplex tableau, and then continue simplex iterations (using the dual simplex method) until the new optimal solution is found.
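The entering-column choice of the dual simplex method used above can be sketched as follows (an illustrative helper; the numbers are those of the tableau with leaving row x6):

```python
def dual_ratio_test(f_row, leaving_row):
    """Among columns with a negative entry in the leaving row, pick the one
    minimising |(z_j - c_j) / y_rj| -- the dual simplex entering column."""
    best_j, best_ratio = None, None
    for j, y in enumerate(leaving_row):
        if y < 0:
            ratio = abs(f_row[j] / y)
            if best_ratio is None or ratio < best_ratio:
                best_j, best_ratio = j, ratio
    return best_j

# f row and x6 row (columns x1..x6) from the tableau above.
f_row = [0, -3, -1, -2, 0, 0]
x6_row = [0, -1, -3, -1, 0, 1]
print(dual_ratio_test(f_row, x6_row))  # 2 -> the column of x3 enters
```

The ratios are 3, 1/3 and 2 for the columns of x2, x3 and x4, so x3 is selected, in agreement with the tableau computation.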
The above example introduces the idea of how to handle the case when the optimal solution to the original problem does not satisfy the new constraint. Suppose that B is the optimal basis before the constraint
\[ a^{m+1} x \le b_{m+1} \]
is added. As seen in Section 6.2, the corresponding system is given by
\[ f + (c_B B^{-1} N - c_N) x_N = c_B B^{-1} b, \]
\[ x_B + B^{-1} N x_N = B^{-1} b, \]
\[ a^{m+1}_B x_B + a^{m+1}_N x_N + x_{n+1} = b_{m+1}, \]
where x_{n+1} ≥ 0 is the slack variable of the added constraint. At the basic solution
\[ x_B = B^{-1} b, \quad x_N = 0, \]
the reduced costs are unchanged, so the only condition that can be violated in the new problem is the nonnegativity of the new basic variable x_{n+1} = b_{m+1} − a^{m+1}_B B^{-1} b. If
\[ b_{m+1} - a^{m+1}_B B^{-1} b \ge 0, \]
the current solution remains optimal for the enlarged problem. If
\[ b_{m+1} - a^{m+1}_B B^{-1} b < 0, \]
the solution becomes infeasible and the dual simplex method can be used to find the new optimal solution.
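For the worked example above, this feasibility check can be reproduced numerically (a sketch; the basis before the new constraint is {x1, x5} with right-hand sides b = (6, 4)T):

```python
import numpy as np

B_inv = np.array([[1.0, 0.0],
                  [1.0, 1.0]])          # inverse of the optimal basis {x1, x5}
b = np.array([6.0, 4.0])                # original right-hand sides
a_B = np.array([1.0, 0.0])              # new-row coefficients of x1 and x5
b_new = -2.0                            # RHS of x1 - 2x3 + x6 = -2

slack_value = b_new - a_B @ B_inv @ b   # value of the new basic variable x6
print(slack_value)  # -8.0 -> negative, so the dual simplex method is needed
```

The value −8 is exactly the RHS of the x6 row in the restored tableau above.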
The simplex method assumes that an initial basic feasible solution (BFS) is at hand. In many cases such a BFS is not readily available, and some work may be needed to get the simplex method started.
1.
x1 + 2x2 ≤ 4,
−x1 + x2 ≤ 1,
x1 , x2 ≥ 0.
2.
x1 + x2 + x3 ≤ 6,
x1 , x2 , x3 ≥ 0.
For the first group of constraints, an initial basis can be found without any difficulty; for the second, it cannot, and some extra work is needed to obtain an initial basis. In this chapter we will study two procedures that can be used to construct an initial basis: the two-phase method and the big-M method.
At the end of this chapter, students should be able to apply the two-phase method
and the big-M method to solve some LPs.
Minimise cT x
subject to Ax ≤ b
x ≥ 0.
with b ≥ 0. The most convenient way to find a BFS is to introduce some slack variables
s so that the problem is re-formulated as
Minimise cT x
subject to Ax + s = b
x, s ≥ 0.
Now we pick x = 0 and s = b as a BFS, which is feasible because b ≥ 0. Since the basic variables coincide with the slack variables, which have zero cost, the objective value is f = 0. So our initial tableau is very simple:
x s RHS
f −cT 0 0
s A I b
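When b ≥ 0 this initial tableau can be assembled mechanically; a sketch (the arguments A, b, c stand for any problem of this form):

```python
import numpy as np

def initial_tableau(A, b, c):
    """Build the starting simplex tableau [f row; A I b] for
    min c^T x s.t. Ax <= b, x >= 0, with b >= 0 (slacks form the basis)."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    f_row = np.concatenate([-c, np.zeros(m), [0.0]])    # -c^T, zero slacks, f = 0
    body = np.hstack([A, np.eye(m), b.reshape(-1, 1)])  # slack columns = identity
    return np.vstack([f_row, body])

# Example: min -2x1 + x2 - x3  s.t.  x1 + x2 + x3 <= 6,  -x1 + 2x2 <= 4.
T = initial_tableau([[1, 1, 1], [-1, 2, 0]], [6, 4], [-2, 1, -1])
print(T[0])  # f row: [2, -1, 1, 0, 0, 0]
```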
However, in general we may not be that lucky. In particular, we may not have enough slack variables to construct a BFS, not all components of b may be nonnegative, and/or A may not contain an identity sub-matrix.
x1 + x2 + x3 ≤ 6,
x2 , x3 ≥ 0.
Note that x1 is unrestricted, therefore we introduce x′1 , x′′1 ≥ 0, and put x1 = x′1 − x′′1 . We
also introduce slack variables x4 , x5 :
x′1 − x′′1 + x2 + x3 + x4 = 6,
x′1 , x′′1 , x2 , x3 , x4 , x5 ≥ 0.
We see that the new constraint matrix does not contain a 2 × 2 identity sub-matrix, and therefore an obvious feasible basis cannot be extracted.
Minimise cT x
subject to Ax = b
x ≥ 0,
If A has an m × m identity sub-matrix, then we know a quick way to select our BFS. Otherwise, we may face some difficulty. In this case we apply the two-phase method. This consists of adding artificial variables xa to the constraints in order to force the appearance of an identity sub-matrix. Specifically, we modify the constraints as follows:
Ax + xa = b, x, xa ≥ 0,
so that the new constraint matrix is [A, I] and we have an immediate BFS in the form x = 0, xa = b.
Note the difference between slack variables and artificial variables: slack variables are introduced to transform problems in non-standard form into standard form, whereas artificial variables are introduced after the problem has been reduced to standard form and are not legitimate variables of the problem.
The two-phase method consists of a Phase I problem, which is used to find a basic feasible solution for the original constraints, followed by Phase II, in which the simplex method is applied to solve the original linear programming problem from that starting point.
Phase I. Solve the following linear programming problem, starting with the basic feasible solution x = 0 and xa = b:
Minimise eT xa
subject to Ax + xa = b,
x, xa ≥ 0,
where e = (1, . . . , 1)T is the all-ones vector.
• If the optimal value of this problem is strictly positive, then the original problem is infeasible, since Ax = b, x ≥ 0 has a solution exactly when the Phase I optimal value is zero.
• Otherwise, we have reached optimality at a point of the form (x̄, 0). By construction, x̄ is feasible for the original problem. Let us split it into basic and nonbasic variables xB and xN : this step is simple — put all nonzero variables in the basis, and then add enough zero variables to reach a total of m basic variables.
Phase II. Having chosen xB , we now have B and we can solve the following linear
programming problem starting with the basic feasible solution xB = B −1 b and xN = 0 :
Minimise cB x B + cN x N
subject to xB + B −1 N xN = B −1 b,
xB , xN ≥ 0.
Ax + xa = b, x ≥ 0, xa ≥ 0.
This gives an immediate basic feasible solution of the new system, namely xa = b and
x = 0. Even though we now have a starting basic feasible solution and the simplex method
can be applied, we have in fact changed the problem. In order to get back to our original
problem, we must force these artificial variables to zero, because Ax = b if and only if
Ax + xa = b with xa = 0. In other words, artificial variables are only a tool to get the
simplex method started, however, we must guarantee that these variables will eventually
drop to zero.
Minimise x1 − 2x2
subject to x1 + x2 ≥ 2,
−x1 + x2 ≥ 1,
x2 ≤ 3,
x1 , x2 ≥ 0.
Minimise x1 − 2x2
subject to x1 + x2 − x3 = 2,
−x1 + x2 − x4 = 1,
x2 + x5 = 3,
x1 , x2 , x3 , x4 , x5 ≥ 0.
Phase I
Minimise x6 + x7
subject to x1 + x2 − x3 + x6 =2
−x1 + x2 − x4 + x7 = 1
x2 + x5 =3
x1 , x2 , x3 , x4 , x5 , x6 , x7 ≥ 0.
x1 x2 x3 x4 x5 x6 x7 RHS
f0 0 2 −1 −1 0 0 0 3
x6 1 1 −1 0 0 1 0 2
x7 −1 1* 0 −1 0 0 1 1
x5 0 1 0 0 1 0 0 3
Since z2 − c2 = 2 > 0, x2 enters the basis and replaces x7 ; after the pivoting operations, we have the following tableau.
x1 x2 x3 x4 x5 x6 x7 RHS
f0 2 0 −1 1 0 0 −2 1
x6 2* 0 −1 1 0 1 −1 1
x2 −1 1 0 −1 0 0 1 1
x5 1 0 0 1 1 0 −1 2
Replacing x6 by x1 , we have
x1 x2 x3 x4 x5 x6 x7 RHS
f0 0 0 0 0 0 −1 −1 0
x1 1 0 −1/2 1/2 0 1/2 −1/2 1/2
x2 0 1 −1/2 −1/2 0 1/2 1/2 3/2
x5 0 0 1/2 1/2 1 −1/2 −1/2 3/2
which is the optimal tableau of the Phase I. There is no artificial variable in the optimal
basis of the first phase. We have a starting basic feasible solution for proceeding to the
second phase.
Phase II
We remove the artificial-variable columns from the tableau and no longer consider them. We replace the first row by the coefficients of the equation f − x1 + 2x2 = 0, since the original objective is f = x1 − 2x2.
x1 x2 x3 x4 x5 RHS
f −1 2 0 0 0 0
x1 1 0 −1/2 1/2 0 1/2
x2 0 1 −1/2 −1/2 0 3/2
x5 0 0 1/2 1/2 1 3/2
After restoring zero reduced costs for the basic variables x1 and x2 , we obtain
x1 x2 x3 x4 x5 RHS
f 0 0 1/2 3/2 0 −5/2
x1 1 0 −1/2 (1/2)* 0 1/2
x2 0 1 −1/2 −1/2 0 3/2
x5 0 0 1/2 1/2 1 3/2
Since z4 − c4 = 3/2 > 0, x4 is the entering variable. By minimum ratio test, x1 is the
leaving variable.
x1 x2 x3 x4 x5 RHS
f −3 0 2 0 0 −4
x4 2 0 −1 1 0 1
x2 1 1 −1 0 0 2
x5 −1 0 1* 0 1 1
Now z3 − c3 = 2 > 0, so x3 enters the basis; by the minimum ratio test (only the x5 row has a positive entry in the x3 column), x5 leaves.
x1 x2 x3 x4 x5 RHS
f −1 0 0 0 −2 −6
x4 1 0 0 1 1 2
x2 0 1 0 0 1 3
x3 −1 0 1 0 1 1
All zj − cj ≤ 0, so this tableau is optimal: the solution is (x1 , x2 ) = (0, 3) with objective value −6.
Note that phase I moves from the infeasible point (0,0) to the infeasible point (0,1),
and finally to the feasible point (1/2, 3/2). From this extreme point, phase II moves to
the feasible point (0,2) and finally to the optimal point (0, 3).
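The result can be cross-checked with an off-the-shelf solver (a sketch assuming SciPy is available; `linprog` is an interior-point/HiGHS solver, not the tableau method of these notes):

```python
from scipy.optimize import linprog

# min x1 - 2x2  s.t.  x1 + x2 >= 2,  -x1 + x2 >= 1,  x2 <= 3,  x >= 0.
# linprog expects <= rows, so the >= constraints are negated.
res = linprog(c=[1, -2],
              A_ub=[[-1, -1], [1, -1], [0, 1]],
              b_ub=[-2, -1, 3])
print(res.x)    # approximately [0., 3.]
print(res.fun)  # approximately -6.0
```

This agrees with the optimal point (0, 3) and value −6 found by the two-phase method.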
• If all artificial variables are out of the basis, remove their columns from the final Phase I tableau and construct the initial simplex tableau, since a basic feasible solution is at hand; then proceed to Phase II.
• If some artificial variables are in the basis at zero level, we have two ways to continue. One is to first eliminate the columns corresponding to the nonbasic artificial variables of Phase I, and then proceed directly to Phase II, replacing the remaining artificial variables along the way without increasing the original objective value.
Minimise 2x1 + 3x2 + 4x3
s.t. x1 + 2x2 + x3 ≥ 3,
2x1 − x2 + 3x3 ≥ 4,
x1 , x2 , x3 ≥ 0.
First, we introduce slack variables x4 and x5 , to write the problem in the standard form
s.t. x1 + 2x2 + x3 − x4 = 3,
2x1 − x2 + 3x3 − x5 = 4,
x1 , ..., x5 ≥ 0.
Phase I: now we introduce artificial variables x6 and x7 and consider the following phase-I
problem
Minimise x6 + x7
s.t. x1 + 2x2 + x3 − x4 + x6 = 3,
2x1 − x2 + 3x3 − x5 + x7 = 4,
x1 , ..., x7 ≥ 0.
x1 x2 x3 x4 x5 x6 x7 RHS
f0 0 0 0 0 0 −1 −1 0
x6 1 2 1 -1 0 1 0 3
x7 2 −1 3 0 -1 0 1 4
After restoring zero reduced costs for the basic variables x6 and x7 , we obtain
x1 x2 x3 x4 x5 x6 x7 RHS
f0 3 1 4 -1 -1 0 0 7
x6 1 2 1 -1 0 1 0 3
x7 2 −1 3∗ 0 -1 0 1 4
x3 enters, x7 leaves
x1 x2 x3 x4 x5 x6 x7 RHS
f0 1/3 7/3 0 -1 1/3 0 -4/3 5/3
x6 1/3 7/3 0 -1 1/3 1 -1/3 5/3
x3 2/3 -1/3 1 0 -1/3 0 1/3 4/3
x2 enters, x6 leaves.
x1 x2 x3 x4 x5 x6 x7 RHS
f0 0 0 0 0 0 -1 -1 0
x2 1/7 1 0 -3/7 1/7 3/7 -1/7 5/7
x3 5/7 0 1 -1/7 -2/7 1/7 2/7 11/7
This is an optimal solution to the Phase I problem. The artificial variables x6 and x7 are not basic at optimality. Thus we remove the x6 and x7 columns and move to Phase II.
Phase II: we replace the original cost function to obtain the tableau:
x1 x2 x3 x4 x5 RHS
f -2 -3 -4 0 0 0
x2 1/7 1 0 -3/7 1/7 5/7
x3 5/7 0 1 -1/7 -2/7 11/7
After restoring zero reduced costs for the basic variables x2 and x3 , we obtain
x1 x2 x3 x4 x5 RHS
f 9/7 0 0 -13/7 -5/7 59/7
x2 1/7 1 0 -3/7 1/7 5/7
x3 5/7∗ 0 1 -1/7 -2/7 11/7
x1 enters, x3 leaves.
x1 x2 x3 x4 x5 RHS
f 0 0 −9/5 −8/5 −1/5 28/5
x2 0 1 −1/5 −2/5 1/5 2/5
x1 1 0 7/5 −1/5 −2/5 11/5
All zj − cj ≤ 0, so this tableau is optimal: the solution is (x1 , x2 , x3 ) = (11/5, 2/5, 0) with objective value 28/5.
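The optimum can be sanity-checked by direct substitution (the objective min 2x1 + 3x2 + 4x3 is the one read off the Phase II cost row):

```python
# Candidate optimum read off the final tableau.
x1, x2, x3 = 11 / 5, 2 / 5, 0.0

# Both original >= constraints hold with equality at this point.
assert abs(x1 + 2 * x2 + x3 - 3) < 1e-12   # x1 + 2x2 + x3 >= 3, tight
assert abs(2 * x1 - x2 + 3 * x3 - 4) < 1e-12  # 2x1 - x2 + 3x3 >= 4, tight

value = 2 * x1 + 3 * x2 + 4 * x3
print(value)  # 5.6, i.e. 28/5
```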
Minimise −x1 + 2x2 − 3x3
s.t. x1 + x2 + x3 = 6,
−x1 + x2 + 2x3 = 4,
2x2 + 3x3 = 10,
x3 ≤ 2,
x1 , x2 , x3 ≥ 0.
First, we introduce the slack variable x4 to write the problem in the standard form
s.t. x1 + x2 + x3 = 6,
−x1 + x2 + 2x3 = 4,
2x2 + 3x3 = 10,
x3 + x4 = 2,
x1 , x2 , x3 , x4 ≥ 0.
Phase I:
Minimise x5 + x6 + x7
s.t. x1 + x2 + x3 + x5 = 6,
−x1 + x2 + 2x3 + x6 = 4,
2x2 + 3x3 + x7 = 10,
x3 + x4 = 2,
x1 , . . . , x7 ≥ 0.
x1 x2 x3 x4 x5 x6 x7 RHS
f 0 4 6 0 0 0 0 20
x5 1 1 1 0 1 0 0 6
x6 −1 1 2 0 0 1 0 4
x7 0 2 3 0 0 0 1 10
x4 0 0 1 1 0 0 0 2
Pivot at y43 :
x1 x2 x3 x4 x5 x6 x7 RHS
f 0 4 0 −6 0 0 0 8
x5 1 1 0 −1 1 0 0 4
x6 −1 1 0 −2 0 1 0 0
x7 0 2 0 −3 0 0 1 4
x3 0 0 1 1 0 0 0 2
Pivot at y62 :
x1 x2 x3 x4 x5 x6 x7 RHS
f 4 0 0 2 0 −4 0 8
x5 2 0 0 1 1 −1 0 4
x2 −1 1 0 −2 0 1 0 0
x7 2 0 0 1 0 −2 1 4
x3 0 0 1 1 0 0 0 2
Pivot at y51 :
x1 x2 x3 x4 x5 x6 x7 RHS
f 0 0 0 0 −2 −2 0 0
x1 1 0 0 1/2 1/2 −1/2 0 2
x2 0 1 0 −3/2 1/2 1/2 0 2
x7 0 0 0 0 −1 −1 1 0
x3 0 0 1 1 0 0 0 2
We have reached optimality. However, the artificial variable x7 remains in the basis. This is not a problem because its value is 0; moreover, its row has zero entries in all the original-variable columns (the corresponding constraint is redundant), so we can eliminate that row altogether. The Phase II tableau is then obtained by inserting the original cost function at row 0:
x1 x2 x3 x4 RHS
f 1 −2 3 0 0
x1 1 0 0 1/2 2
x2 0 1 0 −3/2 2
x3 0 0 1 1 2
The big-M approach is also based on introducing artificial variables, but it gets rid of them by assigning the artificial variables coefficients in the original objective function chosen in such a way as to make their presence in the basis at a positive level very unattractive. Consider the following problem:
Minimise cT x + M (eT xa )
subject to Ax + xa = b,
x, xa ≥ 0.
Even though the starting solution x = 0, xa = b is feasible for the above problem, it has a very unattractive objective value for large M. Therefore, the simplex method will try to drive the artificial variables out of the basis, and then continue to find an optimal solution of the original problem.
Two cases may arise after solving the Big-M problem by the simplex method:
Case A: The Big-M problem has a finite optimal solution. There are two subcases:
• (x, xa ) = (x∗ , 0) is an optimal solution of the Big-M problem. Then x∗ is the optimal
solution to the original problem.
Proof. If x is an arbitrary feasible point of the original LP, then (x, 0) is a feasible solution to the Big-M problem. Since (x∗ , 0) is the optimal solution to the Big-M problem, we have
cT x∗ + M · 0 ≤ cT x + M · 0,
i.e.,
cT x∗ ≤ cT x,
so x∗ is optimal for the original problem.
• (x, xa ) = (x∗ , x∗a ) with x∗a ≠ 0 is an optimal solution of the Big-M problem. In this case, the original LP has no feasible point, i.e., the problem is infeasible.
Proof. Suppose by contradiction that x is a feasible point of the original LP. Then (x, 0) is feasible for the Big-M problem. By optimality of (x∗ , x∗a ), we have
cT x∗ + M eT x∗a ≤ cT x.
Since x∗a ≥ 0 and x∗a ≠ 0, the left-hand side can be made arbitrarily large by increasing M, so for M sufficiently large the inequality fails, a contradiction.
Case B: The Big-M problem has an unbounded optimal value. Assume that zk − ck = maxj (zj − cj ) > 0 and yk ≤ 0; then there are two cases:
• If all the artificial variables in the current basic feasible solution equal zero, then the original LP has an unbounded optimal value.
Proof. In this case, the feasible points of the Big-M problem along the unbounded ray have the form (x, 0), so they are feasible for the original LP, and the original objective cT x decreases without bound along this ray. Hence the original LP has an unbounded optimal value.
• If not all the artificial variables equal zero, then the original LP is infeasible.
Proof. Omitted.
In summary: solve the Big-M problem with the simplex method. If it has an optimal solution with all artificial variables zero, that solution solves the original LP; if the artificial variables cannot all be driven to zero, the original LP is infeasible; and if the Big-M problem is unbounded with all artificial variables zero, the original LP is unbounded.
Minimise x1 − 2x2
subject to x1 + x2 ≥ 2,
x2 ≤ 3,
x1 , x2 ≥ 0.
Adding slack variables x3 and x4 and the artificial variable x5 , the Big-M LP can be written as
Minimise x1 − 2x2 + M x5
subject to x1 + x2 − x3 + x5 = 2,
x2 + x4 = 3,
x1 , x2 , x3 , x4 , x5 ≥ 0.
Note that in the simplex method we work with tableaux (including the initial one) of the form
xB xN RHS
f 0 cB B −1 N − cN cB B −1 b
xB I B −1 N B −1 b
where the reduced costs zj − cj are zero for all basic variables.
Thus we obtain the following initial tableau for the simplex method
x1 x2 x3 x4 x5 RHS
f M − 1 M+2 −M 0 0 2M
x5 1 1∗ −1 0 1 2
x4 0 1 0 1 0 3
Since z2 − c2 = M + 2 > 0 is the largest reduced cost, x2 enters the basis; by the minimum ratio test (pivot marked ∗), x5 leaves.
x1 x2 x3 x4 x5 RHS
f −3 0 2 0 −(M + 2) −4
x2 1 1 −1 0 1 2
x4 −1 0 1∗ 1 −1 1
Now z3 − c3 = 2 > 0, so x3 enters; by the minimum ratio test (pivot marked ∗), x4 leaves.
x1 x2 x3 x4 x5 RHS
f −1 0 0 −2 −M −6
x2 0 1 0 1 0 3
x3 −1 0 1 1 −1 1
All remaining reduced costs are nonpositive (for M large enough), so this tableau is optimal: the solution is (x1 , x2 ) = (0, 3) with objective value −6, and the artificial variable x5 has been driven out of the basis.
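The effect of a concrete large M can be demonstrated with an off-the-shelf solver (a sketch assuming SciPy is available; M = 10^6 is an arbitrary large value, not part of the notes):

```python
from scipy.optimize import linprog

M = 1e6
# Big-M LP: min x1 - 2x2 + M*x5
# s.t. x1 + x2 - x3 + x5 = 2,  x2 + x4 = 3,  all variables >= 0.
res = linprog(c=[1, -2, 0, 0, M],
              A_eq=[[1, 1, -1, 0, 1],
                    [0, 1, 0, 1, 0]],
              b_eq=[2, 3])
x1, x2, x3, x4, x5 = res.x
print(round(res.fun, 6))  # -6.0: the artificial variable x5 is driven to 0
```

The solver returns (x1, x2) ≈ (0, 3) with x5 ≈ 0, matching the tableau computation.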
subject to x1 + x2 + 2x3 ≤ 4,
−x1 + x3 ≥ 4,
x3 ≥ 3,
x1 , x2 , x3 ≥ 0.
Minimise −x1 − x2
subject to x1 − x2 − x3 = 1,
−x1 + x2 + 2x3 − x4 = 1,
x1 , x2 , x3 , x4 ≥ 0.