INDEX
UNIT 3
MARKOV CHAINS
3.1 INTRODUCTION
3.2 FORMULATION OF MARKOV CHAINS
3.3 STOCHASTIC PROCESSES
3.4 FIRST-ORDER MARKOV PROPERTY
3.5 ONE-STEP STATIONARY TRANSITION PROBABILITIES
3.6 n-STEP STATIONARY TRANSITION PROBABILITIES
3.7 ABSORBING STATES
3.8 STATIONARY TRANSITION PROBABILITIES OF STEADY STATES
3.9 FIRST PASSAGE TIMES
3.10 USE OF COMPUTER PROGRAMS
Unit 3
MARKOV CHAINS
3.1 Introduction
A Markov chain is a series of events in which the probability that an event occurs depends on the immediately preceding event. Chains of this type have memory: they "remember" the last event, and this conditions the possibilities for future events. This dependence on the previous event distinguishes Markov chains from series of independent events, such as tossing a coin or rolling a die.
In business, Markov chains have been used to analyze the purchase patterns of delinquent debtors, to plan personnel needs, and to analyze equipment replacement.
Markov analysis, named after the Russian mathematician who developed the method in 1907, makes it possible to find the probability that a system is in a particular state at a given moment. More importantly, it makes it possible to find the long-run average, or steady-state, probability of each state. With this information, the behavior of the system over time can be predicted.
The hardest task is recognizing when Markov analysis can be applied. The most important characteristic to look for is this memory of one event to the next.
3.2 Formulation of Markov chains
As noted above, a Markov chain is a series of events in which the probability of each event depends on the immediately preceding one, which distinguishes it from a series of independent events.
Figure 4.1.1 shows the process for formulating a Markov chain. The Markov generator produces one of n possible events, Ej (j = 1, 2, ..., n), at discrete time intervals (which do not have to be equal). The probabilities of occurrence of these events depend on the state of the generator, and this state is described by the last event generated. In figure 4.1.1 the last event generated was Ej, so the generator is in state Mj.
The probability that Ek is the next event generated is a conditional probability, P(Ek / Mj), called the transition probability from state Mj to state Ek. To describe a Markov chain completely, it is necessary to know the current state and all the transition probabilities.
Transition probabilities.
One way to describe a Markov chain is with a state diagram, like the one shown in figure 4.1.2, which illustrates a Markov system with four possible states: M1, M2, M3, and M4. The conditional, or transition, probability of moving from one state to another is indicated on the diagram.
Another method of displaying the transition probabilities is to use a transition matrix. The transition matrix for the state-diagram example is shown in table 4.1.1.
The n-step transition probabilities are written p_ij(n), for n = 0, 1, 2, ...; the superscript (n) is omitted when n = 1.
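The generator just described can be sketched in code. The four states follow figure 4.1.2, but the transition probabilities below are assumptions for illustration, since the figure's values are not reproduced in the text:

```python
import random

# Four states as in figure 4.1.2; the transition probabilities are
# assumed for illustration (the figure's values are not in the text).
states = ["M1", "M2", "M3", "M4"]
P = {"M1": [0.2, 0.5, 0.2, 0.1],      # row: P(next state | current = M1)
     "M2": [0.1, 0.6, 0.2, 0.1],
     "M3": [0.3, 0.2, 0.4, 0.1],
     "M4": [0.25, 0.25, 0.25, 0.25]}

def simulate(start, n, seed=0):
    """Generate a chain of n transitions from the given starting state."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(n):
        # the next event depends only on the current state (Markov property)
        chain.append(rng.choices(states, weights=P[chain[-1]])[0])
    return chain

print(simulate("M1", 10))
```

Each row of P is the distribution of the next event given the current state, which is exactly the content of the transition matrix of table 4.1.1.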
3.3 Stochastic Processes
A stochastic process is defined simply as an indexed collection of random variables {X_t}, where the index t takes values in a given set T. T is often taken to be the set of non-negative integers, and X_t represents a measurable characteristic of interest at time t. For example, the stochastic process X1, X2, X3, ... can represent the collection of weekly (or monthly) inventory levels of a given product, or the collection of weekly (or monthly) demands for that product.
A study of the behavior of an operating system over some period usually leads to the analysis of a stochastic process with the following structure. At specific points in time t, the system is in exactly one of a finite number of mutually exclusive and exhaustive states, labeled 0, 1, ..., M. The points in time may be equally spaced, or their spacing may depend on the overall behavior of the system in which the stochastic process is embedded. Although the states may characterize the system qualitatively as well as quantitatively, no generality is lost with the numerical labels 0, 1, ..., M, which will be used hereafter to denote the possible states of the system. Thus the mathematical representation of the physical system is that of a stochastic process {X_t}, where the random variables are observed at t = 0, 1, 2, ..., and where each random variable can take any of the M + 1 integer values 0, 1, ..., M. These integers characterize the M + 1 states of the process.
3.4 First-Order Markov Property
A stochastic process is said to have the Markov property if

P{X_t+1 = j | X_0 = k_0, X_1 = k_1, ..., X_t-1 = k_t-1, X_t = i} = P{X_t+1 = j | X_t = i}

for all t = 0, 1, ... and every sequence i, j, k_0, k_1, ..., k_t-1.
It can be shown that this Markov property is equivalent to stating that the conditional probability of any future "event", given any past "events" and the current state X_t = i, is independent of the past events and depends only on the current state of the process. The conditional probabilities P{X_t+1 = j | X_t = i} are called transition probabilities. If, for each i and j,

P{X_t+1 = j | X_t = i} = P{X_1 = j | X_0 = i}, for all t = 0, 1, ...,

then the (one-step) transition probabilities are said to be stationary, and they are usually denoted by p_ij. Thus, having stationary transition probabilities implies that the transition probabilities do not change over time. The existence of stationary (one-step) transition probabilities also implies that, for each i, j and n (n = 0, 1, 2, ...),

P{X_t+n = j | X_t = i} = P{X_n = j | X_0 = i}

for all t = 0, 1, .... These conditional probabilities are almost always denoted by p_ij(n) and are called n-step transition probabilities. Thus, p_ij(n) is simply the conditional probability that the random variable X, starting in state i, reaches state j after n time steps (time units).
Since they are conditional probabilities, the p_ij(n) must satisfy the properties:

p_ij(n) >= 0, for all i, j and n;
the sum over j of p_ij(n) equals 1, for all i and n.
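These two properties can be checked numerically for any stochastic matrix and any n; the two-state matrix below is an arbitrary illustration, not taken from the text:

```python
import numpy as np

# Any stochastic matrix: nonnegative entries, rows summing to 1 (assumed example).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

for n in range(1, 6):
    Pn = np.linalg.matrix_power(P, n)        # n-step transition matrix P(n)
    assert (Pn >= 0).all()                   # p_ij(n) >= 0
    assert np.allclose(Pn.sum(axis=1), 1.0)  # each row is a probability distribution
print("properties hold for n = 1..5")
```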
3.5 One-Step Stationary Transition Probabilities.
Example:
A camera store stocks a particular model of camera that can be ordered weekly. Let D1, D2, ... represent the demand for this camera during the first week, the second week, ..., respectively. It is assumed that the D_t are independent and identically distributed random variables with a known probability distribution. Let X0 be the number of cameras on hand at the moment the process begins, X1 the number of cameras at the end of week one, X2 the number of cameras at the end of week two, and so on. Suppose that X0 = 3. On Saturday night the store places an order that is delivered on Monday when the store opens. The store uses the following (s, S) ordering policy: if the number of cameras in inventory at the end of the week is less than s = 1 (no cameras in the store), it orders (up to) S = 3; otherwise it places no order (if there are one or more cameras in stock, no order is placed). It is assumed that sales are lost when demand exceeds inventory. Then {X_t}, for t = 0, 1, ..., is a stochastic process of the form just described. The possible states of the process are the integers 0, 1, 2, 3, which represent the possible number of cameras in inventory at the end of the week.
Note that {X_t}, where X_t is the number of cameras in the store at the end of week t (before receiving the order), is a Markov chain. We now see how to obtain the (one-step) transition probabilities, that is, the elements of the (one-step) transition matrix.
Assume that each D_t has a Poisson distribution with parameter λ = 1 (the value consistent with the probabilities used later in this unit). To obtain p_00 it is necessary to evaluate P{X_t = 0 | X_t-1 = 0}. If X_t-1 = 0, the store orders up to three cameras, so X_t = 0 means that the demand during the week was three or more cameras. Thus p_00 = P{D_t >= 3} = 0.080, the probability that a Poisson random variable with parameter λ = 1 takes the value 3 or more; p_01 and p_02 can be obtained in a similar way. If X_t-1 = 1, no order is placed, so to obtain X_t = 0 the demand during the week must be 1 or more; thus p_10 = P{D_t >= 1} = 0.632. To find p_21, note that if X_t-1 = 2, then X_t = 1 requires the demand during the week to be exactly 1; therefore p_21 = P{D_t = 1} = 0.368. The remaining elements are obtained in a similar way, which leads to the following (one-step) transition matrix:

State    0      1      2      3
0      0.080  0.184  0.368  0.368
1      0.632  0.368  0      0
2      0.264  0.368  0.368  0
3      0.080  0.184  0.368  0.368
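As a cross-check, this matrix can be generated directly from the (s, S) policy. Poisson demand with λ = 1 is assumed here, that being the parameter value consistent with the probabilities quoted in the rest of the unit:

```python
import math

LAM = 1.0  # Poisson parameter; assumed value that matches the figures quoted later

def pois(k):
    """P{D = k} for Poisson demand."""
    return math.exp(-LAM) * LAM**k / math.factorial(k)

def pois_tail(k):
    """P{D >= k}."""
    return 1.0 - sum(pois(i) for i in range(k))

def next_prob(i, j):
    """P{X_t = j | X_t-1 = i} under the (s=1, S=3) policy with lost sales."""
    start = 3 if i == 0 else i       # reorder up to 3 when stock hits 0
    if j > start:
        return 0.0
    if j == 0:
        return pois_tail(start)      # demand >= start empties the stock
    return pois(start - j)           # demand of exactly start - j leaves j

P = [[next_prob(i, j) for j in range(4)] for i in range(4)]
for row in P:
    print(["%.3f" % p for p in row])
```

Printing the rows reproduces the matrix above (0.080, 0.184, 0.368, 0.368, etc.) to three decimals.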
3.6 n-Step Transition Probabilities
The Chapman-Kolmogorov equations provide a method for calculating these n-step transition probabilities:

p_ij(n) = Σ_k p_ik(m) p_kj(n-m), for any m between 0 and n.

These equations simply state that in going from state i to state j in n steps, the process will be in some state k after exactly m (fewer than n) steps. Thus p_ik(m) p_kj(n-m) is just the conditional probability that, starting in state i, the process goes to state k after m steps and then to state j in n - m steps.
The special cases m = 1 and m = n - 1 lead to the expressions

p_ij(n) = Σ_k p_ik p_kj(n-1)   and   p_ij(n) = Σ_k p_ik(n-1) p_kj

for all i, j and n, so the n-step transition probabilities can be obtained recursively from the one-step transition probabilities. For n = 2, these expressions become

p_ij(2) = Σ_k p_ik p_kj.

Note that the p_ij(2) are the elements of the matrix P(2), and that these elements are obtained by multiplying the one-step transition matrix by itself; that is, P(2) = P * P = P².
In more general terms, it is concluded that the n-step transition probability matrix can be obtained from the expression P(n) = P * P * ... * P = Pⁿ = P Pⁿ⁻¹ = Pⁿ⁻¹ P.
Thus, the n-step transition probability matrix can be obtained by computing the n-th power of the one-step transition matrix. For moderate values of n, the n-step transition matrix can be calculated in the way just described, but when n is large such calculations become tedious and, moreover, rounding errors can introduce inaccuracies.
Example:
Consider again the camera store of section 3.5, with states 0, 1, 2, 3 (the possible number of cameras in inventory at the end of the week) and the one-step transition matrix obtained there. Squaring that matrix gives the two-step transition matrix P(2).
Thus, given that there is one camera in stock at the end of a week, the probability that there will be no cameras in inventory two weeks later is 0.283; that is, p_10(2) = 0.283. Similarly, given that there are two cameras at the end of a week, the probability that there will be three cameras in stock two weeks later is 0.097; that is, p_23(2) = 0.097.
The four-step transition matrix can also be obtained as

P(4) = P⁴ = P(2) * P(2).

Thus, given that one camera is left at the end of a week, the probability that there will be no cameras in inventory four weeks later is 0.282; that is, p_10(4) = 0.282. Similarly, given that two cameras are left in stock at the end of a week, the probability that there will be three cameras in stock four weeks later is 0.171; that is, p_23(4) = 0.171.
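A quick sketch of these calculations with matrix powers (the small discrepancy in the four-step value, 0.281 exactly versus the 0.282 quoted above, comes from rounding P(2) to three decimals before squaring):

```python
import numpy as np

# One-step transition matrix of the camera-store example (states 0..3).
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

P2 = P @ P      # two-step matrix  P(2) = P^2
P4 = P2 @ P2    # four-step matrix P(4) = P(2) * P(2)

print(round(P2[1, 0], 3))   # P{no cameras 2 weeks after having 1}
print(round(P2[2, 3], 3))   # P{3 cameras 2 weeks after having 2}
print(round(P4[1, 0], 3))
print(round(P4[2, 3], 3))
```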
3.7 Absorbing States
A state is said to be absorbing if the probability of making a transition out of that state is zero. Therefore, once the system makes a transition into an absorbing state, it remains there forever.
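In matrix terms, a state i is absorbing exactly when p_ii = 1. A minimal sketch, using an assumed three-state matrix:

```python
import numpy as np

# A chain with one absorbing state; the matrix is an assumed example.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])   # row 2: stays in state 2 with probability 1

# A state is absorbing when its diagonal entry equals 1.
absorbing = [i for i in range(len(P)) if P[i, i] == 1.0]
print("absorbing states:", absorbing)   # -> absorbing states: [2]
```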
3.8 Stationary transition probabilities of steady states.
Theorem
Let P be the transition matrix of a chain with M states. Then there exists a vector π = (π_1, π_2, ..., π_M) such that, for any initial state i,

lim (n → ∞) p_ij(n) = π_j.

The vector π is often called the steady-state distribution, or equilibrium distribution, of the Markov chain. To find the stationary probability distribution of a given chain whose transition matrix is P, note that, according to the theorem, for large n and for all i,

p_ij(n+1) ≈ p_ij(n) ≈ π_j.   (1)

Since p_ij(n+1) = (row i of Pⁿ)(column j of P), we can write

π_j = Σ_k π_k p_kj.   (2)
Example:
Suppose that the entire soft drink industry produces only two colas. Given that a person last bought cola 1, there is a 90% probability that the next purchase will be cola 1. Given that a person last bought cola 2, there is an 80% probability that the next purchase will be cola 2.
Then the steady-state equations (2) are

π_1 = 0.90 π_1 + 0.20 π_2
π_2 = 0.10 π_1 + 0.80 π_2.

Replacing the second equation with the condition π_1 + π_2 = 1 and solving, we obtain π_1 = 2/3 and π_2 = 1/3. Therefore, after a long time, there is a 2/3 probability that a given person will buy cola 1 and a 1/3 probability that the person will buy cola 2.
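The same steady-state probabilities can be obtained numerically by solving equations (2) together with the normalization condition:

```python
import numpy as np

# Brand-switching matrix: rows = current purchase, columns = next purchase.
P = np.array([[0.9, 0.1],   # after cola 1: 90% stay, 10% switch
              [0.2, 0.8]])  # after cola 2: 80% stay, 20% switch

# Solve pi = pi P together with pi_1 + pi_2 = 1:
# write pi (P - I) = 0 and replace one equation with the normalization.
A = P.T - np.eye(2)
A[-1, :] = 1.0              # replace the last equation by sum(pi) = 1
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)                   # -> approximately [2/3, 1/3]
```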
3.9 First passage times.
It is often convenient to make probability statements about the number of transitions the process makes in going from state i to state j for the first time. This interval is called the first passage time from state i to state j. When j = i, the first passage time is just the number of transitions until the process returns to the initial state i; in this case it is called the recurrence time of state i.
To illustrate these definitions, reconsider the camera store inventory example of section 3.5, where X_t is the number of cameras in inventory at the end of week t, the process begins with X_0 = 3, and the possible states are 0, 1, 2, 3. Suppose the following realization occurred:

X_0 = 3, X_1 = 2, X_2 = 1, X_3 = 0, X_4 = 3.
In this case, the first passage time from state 3 to state 1 is 2 weeks, the first passage time from state 3 to state 0 is 3 weeks, and the recurrence time of state 3 is 4 weeks.
In general, first passage times are random variables and therefore have associated probability distributions, which depend on the transition probabilities of the process. In particular, f_ij(n) denotes the probability that the first passage time from state i to state j equals n. It can be shown that these probabilities satisfy the recursive relations

f_ij(1) = p_ij,
f_ij(n) = Σ_{k ≠ j} p_ik f_kj(n-1).

It is therefore possible to calculate the probability of a first passage time from state i to state j in n steps recursively from the one-step transition probabilities. In the example, the probability distribution of the first passage time from state 3 to state 0 begins as follows:

f_30(1) = p_30 = 0.080,
f_30(2) = p_31 f_10(1) + p_32 f_20(1) + p_33 f_30(1) = 0.184(0.632) + 0.368(0.264) + 0.368(0.080) = 0.243.

For fixed i and j, the f_ij(n) are non-negative numbers such that

Σ_n f_ij(n) <= 1.

This sum can be less than 1, which means that a process starting in state i may never reach state j. When the sum equals 1, the f_ij(n) (for n = 1, 2, ...) can be considered a probability distribution for the random variable, the first passage time.
To obtain the expected first passage time from state i to state j, let μ_ij be defined as the expectation of the first passage time: μ_ij = ∞ if Σ_n f_ij(n) < 1, and μ_ij = Σ_n n f_ij(n) if the sum equals 1.
Whenever the sum equals 1, μ_ij uniquely satisfies the equation

μ_ij = 1 + Σ_{k ≠ j} p_ik μ_kj.

When i = j, μ_ii is called the expected recurrence time.
Applied to the inventory example, these equations can be used to calculate the expected time until no cameras remain in the store, assuming the process starts with three cameras; that is, the expected first passage time μ_30. Since all the states are recurrent, the system of equations leads to the expressions

μ_30 = 1 + p_31 μ_10 + p_32 μ_20 + p_33 μ_30
μ_20 = 1 + p_21 μ_10 + p_22 μ_20 + p_23 μ_30
μ_10 = 1 + p_11 μ_10 + p_12 μ_20 + p_13 μ_30.

The simultaneous solution of this system is

μ_10 = 1.58 weeks, μ_20 = 2.50 weeks, μ_30 = 3.50 weeks,

so the expected time until the store runs out of cameras is 3.50 weeks.
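The system above can be solved as a set of linear equations; the sketch below reproduces the μ values for the camera example:

```python
import numpy as np

# One-step transition matrix of the camera-store example (states 0..3).
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

# mu_i0 = 1 + sum_{k != 0} p_ik * mu_k0, for i = 1, 2, 3.
# Rearranged: (I - Q) mu = 1, where Q is P restricted to states 1..3.
Q = P[1:, 1:]
mu = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(np.round(mu, 2))   # expected first passage times mu_10, mu_20, mu_30
```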
Application Case: Personnel Planning.
Transition analysis can be useful in planning to meet personnel needs. Many firms employ workers at different classification levels within the same job category. This is common for office staff, skilled and unskilled workers, and professional personnel. The firm must have the right number of employees at each classification level in order to provide appropriate promotion opportunities, meet the skill requirements of the work, and control the payroll. Proper long-term personnel planning requires considering the movement of people both upward through the classification levels and out of the organization. Markov analysis can help in this planning effort.
The movement of personnel between classifications can be treated as a Markov chain. Assume there are three classification grades, with grade 1 the lowest. Demotions are considered rare and are omitted. The state "leave" is absorbing; it includes resignations, terminations, dismissals and deaths. Eventually, of course, every employee reaches this state.
The transitions from grade 1 to grade 2 and from grade 2 to grade 3 represent promotions. Unlike the leaving rates, these transition probabilities are under the firm's control: they can be set at whatever level the firm deems necessary to meet its objectives. For example, suppose that the firm currently has 30 grade-3 employees, 90 grade-2 employees and 300 grade-1 employees, and that it wants to maintain these levels over the next year. Experience indicates that each year 30% of grade-1 employees, 20% of grade-2 employees and 10% of grade-3 employees leave the firm. If the policy is to hire only at the lowest classification level, how many people must be hired, and how many promoted, next year to keep the levels stable?
This problem can be solved without Markov analysis, but the model helps in conceptualizing it. Since only one cycle is involved, transition analysis is used, starting with the highest grade. No one is promoted out of grade 3, but 10% (that is, 3 employees) leave; all of them must be replaced by promotions from grade 2. At grade 2, 20% (18 employees) leave, and 3 must be promoted to grade 3, for a total loss of 21, which must be made up by promotions from grade 1. At grade 1, 30% (90 employees) leave and 21 must be promoted, for a total loss of 111. Therefore, 111 grade-1 employees must be hired next year.
In this example, some of the transition rates are obtained from external considerations.
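The transition arithmetic above can be sketched as a short computation (the staffing levels and leave rates are those of the example):

```python
# Staffing levels and annual leave rates from the example.
staff = {1: 300, 2: 90, 3: 30}
leave = {1: 0.30, 2: 0.20, 3: 0.10}

# Work from the top grade down: each grade's losses (leavers plus those
# promoted upward out of it) are replaced by promotions from the grade
# below, except grade 1, which is replenished by hiring.
promoted_in = 0
for grade in (3, 2, 1):
    losses = round(staff[grade] * leave[grade]) + promoted_in
    promoted_in = losses          # the grade below must supply this many
hires = promoted_in               # grade 1 losses are covered by hiring
print("new grade-1 hires needed:", hires)   # -> new grade-1 hires needed: 111
```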
CONCLUSION
In conclusion, Markov chains allow us to analyze the behavior of certain systems over periods that may be short or long, and they are related to the absolute probabilities of the states. Most important, however, is the study of the long-run behavior of a system, as the number of transitions tends to infinity. Knowing, at least approximately, the probabilities involved in an experiment allows us to predict, in the short and the long term, the states the system will occupy in future periods, and thus to make decisions, consciously rather than blindly, that affect or benefit our interests, avoiding many errors.
Markov chains also give us an estimated time for reaching each state and the period in which the system occupies it; together with the implementation of a process, the probabilities established in this way become one more tool of Markov analysis.
COMPARATIVE TABLE

CHARACTERISTICS:
A Markov chain is a series of events in which the probability that an event occurs depends on the immediately preceding event.

ADVANTAGES:
1. Markov theory is simple to apply and understand.

DISADVANTAGES:
1. There is no closed-form set of solutions.
2. Each change in the input variables requires a separate solution or set of runs.
3. Complex simulation models can require a great deal of time to build and run.
4. It can be difficult to establish the validity of the model (that is, its correspondence with the real system).
UNIT 4
NETWORK OPTIMIZATION
INTRODUCTION
Network problems arise in a wide variety of situations. Transportation, electrical and communication networks pervade daily life. The network representation is widely used in areas as diverse as production, distribution, project planning, facility location, resource management and financial planning, to name just a few examples. In fact, a network representation provides such a powerful visual and conceptual aid for showing the relationships between the components of a system that it is used in almost every scientific, social and economic field. One of the biggest recent developments in operations research (OR) has been the rapid advance in both the methodology and the application of network optimization models. The appearance of certain algorithms has had a significant impact, as have ideas from computer science about data structures and their efficient manipulation. Consequently, algorithms and software packages now exist and are routinely used to solve very large problems that could not have been handled two or three decades ago. This work presents five important types of network problems and some basic ideas about how to solve them (without going into the data-structure aspects that are so vital for successful large-scale application). The first three types - the shortest path problem, the minimum spanning tree problem and the maximum flow problem - have a specific structure that arises frequently in practice. The fourth type, the minimum cost flow problem, provides a unifying approach to many other applications because of its much more general structure. And finally, the CPM method.
INDEX
UNIT 4
NETWORK OPTIMIZATION
4.1 TERMINOLOGY
4.2 THE SHORTEST PATH PROBLEM: CYCLIC AND ACYCLIC NETWORKS
4.3 MINIMUM SPANNING TREE PROBLEM
4.4 MAXIMUM FLOW PROBLEM
4.5 MINIMUM COST FLOW PROBLEM
4.6 LINEAR PROGRAMMING IN NETWORK THEORY
4.7 USE OF COMPUTER PROGRAMS
4.1 TERMINOLOGY
Network: a set of points and lines connecting certain pairs of points.
Nodes: the points (or vertices).
Arcs: the lines (also called links, edges or branches). They are labeled by naming the nodes at their end points.
Directed arc: an arc through which flow is allowed in only one direction; the direction is indicated by adding an arrowhead at the end of the line representing the arc.
Undirected arc: an arc through which flow is allowed in both directions.
Directed graph: a graph that has only directed arcs.
Undirected graph: a graph whose arcs are all undirected.
Path: a succession of distinct arcs connecting two nodes.
Cycle: a path that begins and ends at the same node.
Connected network: a network in which every pair of nodes is connected.
Tree: a connected graph (for some subset of the n nodes) that contains no undirected cycles.
Spanning tree: a connected network for all n nodes that contains no undirected cycles.
Arc capacity: the maximum amount of flow (possibly infinite) that can circulate through a directed arc.
Source node: an origin node; it has the property that the flow leaving the node exceeds the flow entering it.
Demand node: a destination node, where the entering flow exceeds the flow leaving it.
Transshipment node: an intermediate node; it satisfies conservation of flow, that is, the flow entering it equals the flow leaving it.
4.2 THE SHORTEST PATH PROBLEM: CYCLIC AND ACYCLIC NETWORKS
The shortest path problem involves a set of connected nodes in which only one node is considered the origin and only one node the destination. The objective is to determine a path of connections that minimizes the total distance from the origin to the destination. The problem is solved by a "labeling algorithm". The shortest route algorithm is a classic of operations research; it is used, for example, to find, among a set of cities connected by roads, the minimum-length route from one city to another. There are two main types of algorithms, for acyclic and cyclic networks:
ACYCLIC ALGORITHMS: used in networks without cycles, that is, networks with no route that, starting from a node, leads back to that same node. Cycles are also called "loops".
CYCLIC ALGORITHMS: used for networks that contain cycles or loops. An example of a loop: if from node "A" I can go to node "B", from "B" to "C", from "C" to "D", and from "D" back to "A", there is a loop or cycle. The arrows indicate the direction in which movement is allowed.
The shortest route problem, then, is to find the minimum-distance or minimum-cost route from the starting (initial) node to the destination (terminal) node.
EXAMPLE:
A car rental company is developing a replacement plan for its fleet for the next five years. A car must be in service at least one year before its replacement is considered. Table 8-1 summarizes the replacement cost per unit (in thousands of monetary units) as a function of the year of purchase and the number of years in operation. The cost includes purchase, insurance premiums, operation and maintenance.
This problem can be represented by a network as follows. Each year is represented by a node. The "length" of a branch connecting two nodes equals the corresponding replacement cost given in table 8-1. Figure 8-6 represents the network. The problem reduces to determining the shortest "route" from node 1 to node 5. The shortest route can be determined with the algorithm presented in section 8.3.2; the optimal solution is the route 1 - 2 - 5.
Table 8-1 (replacement cost, in thousands of monetary units, for a car bought at the start of year i and replaced at the start of year j):

                To year:   2     3     4     5
From year 1:              4.0   5.4   9.8  13.7
From year 2:                    4.3   6.2   8.1
From year 3:                          4.8   7.1
From year 4:                                4.9

[Figure 8-6: the replacement network, with nodes 1 through 5 and branch lengths taken from table 8-1.]
This gives a total cost of 4.0 + 8.1 = 12.1 (thousands of monetary units). It means that each car should be replaced in its second year of use and discarded in the fifth year.
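The shortest route through figure 8-6 can be computed with a labeling method; the sketch below uses Dijkstra's algorithm with the branch lengths of table 8-1:

```python
import heapq

# Replacement costs from table 8-1: cost[i][j] = buying in year i, replacing in year j.
cost = {1: {2: 4.0, 3: 5.4, 4: 9.8, 5: 13.7},
        2: {3: 4.3, 4: 6.2, 5: 8.1},
        3: {4: 4.8, 5: 7.1},
        4: {5: 4.9},
        5: {}}

def shortest_path(src, dst):
    """Dijkstra's algorithm on the replacement network."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in cost[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    # reconstruct the route from dst back to src
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

path, total = shortest_path(1, 5)
print(path, round(total, 1))   # -> [1, 2, 5] 12.1
```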
A basic hypothesis of the algorithm is that all distances in the network are non-negative.
4.3 MINIMUM SPANNING TREE PROBLEM
This problem considers an undirected, connected network. A spanning tree with the minimum total arc length must be found.
Algorithm for the minimum spanning tree problem:
1. Arbitrarily select any node and connect it (that is, add a link) to the nearest distinct node.
2. Identify the unconnected node that is closest to a connected node, and connect these two nodes (that is, add a link between them). Repeat this step until all nodes are connected.
3. Ties. Ties for the nearest distinct node (step 1) or for the closest unconnected node (step 2) may be broken arbitrarily, and the algorithm will still reach an optimal solution. However, such ties signal that there may be (but need not be) multiple optimal solutions. All such solutions can be identified by working through the other ways of breaking the ties to the end.
The quickest way to execute this algorithm manually is the graphical approach illustrated below.
[Network diagram: nodes 0, A, B, C, D, E, T with the branch lengths between them; the diagram is repeated below at each step of the algorithm.]
Arbitrarily, node 0 is selected as the start. The unconnected node closest to 0 is A; node A is connected to node 0.
The closest unconnected node to either node 0 or A is node B
(closer to A). Node B is connected to node A.
The nearest unconnected node to 0, A, or B is node C (closest to B). Node C is connected to node B.
The closest unconnected node to 0, A, B, or C is node E (closest to B). Node E is connected to node B.
The closest unconnected node to nodes 0, A, B, C, or E is node D (closest to
E). Node D is connected to Node E.
The only unconnected node is node T. It is closest to node D, so node T is connected to node D.
All nodes have now been connected, so this is the (optimal) solution that was sought. The total length of the branches is 14 miles.
Although with this procedure it may seem at first glance that the choice of the initial node affects the final solution (and the total length of the links), in reality it does not. It is suggested that you verify this for the example by reapplying the algorithm with an initial node other than 0.
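The graphical procedure above is Prim's algorithm. The sketch below assumes the branch lengths of the standard version of this example (the figure does not reproduce cleanly in the text); with them it recovers the 14-mile tree:

```python
# Branch lengths assumed from the standard version of this example
# (the figure in the text is not reproduced cleanly).
edges = {("O", "A"): 2, ("O", "B"): 5, ("O", "C"): 4,
         ("A", "B"): 2, ("A", "D"): 7,
         ("B", "C"): 1, ("B", "D"): 4, ("B", "E"): 3,
         ("C", "E"): 4, ("D", "E"): 1, ("D", "T"): 5, ("E", "T"): 7}

def length(u, v):
    """Branch length between u and v (infinite if no branch exists)."""
    return edges.get((u, v), edges.get((v, u), float("inf")))

nodes = {"O", "A", "B", "C", "D", "E", "T"}
connected = {"O"}                 # step 1: start from an arbitrary node
tree = []
while connected != nodes:
    # step 2: the unconnected node nearest to any connected node
    u, v = min(((u, v) for u in connected for v in nodes - connected),
               key=lambda p: length(*p))
    tree.append((u, v, length(u, v)))
    connected.add(v)

print(tree)
print("total length:", sum(w for _, _, w in tree))   # -> total length: 14
```

Here "O" plays the role of the node labeled 0 in the walkthrough above.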
Within this chapter, the minimum spanning tree problem is the one that falls into the broad category of network design. In this category the goal is to design the most appropriate network for the given problem (frequently a transportation system), rather than to analyze an already designed network. Reference 8 provides a survey of this important area.
4.4 MAXIMUM FLOW PROBLEM
In a network with flow capacities on the arcs, the problem is to determine the maximum possible flow from the origins without exceeding the flow capacity of any arc. Consider a network with m nodes and n arcs carrying a single-commodity flow. Denote the flow on arc (i, j) by X_ij, and associate with each arc a flow capacity k_ij. In this network, we wish to find the maximum total flow F from the source node to the sink node.
In the linear programming formulation, the objective is to maximize F, the amount that leaves the origin over the various routes. For each intermediate node, what enters must equal what leaves. On some routes the flow may go in either direction. The capacity that can be sent in a particular direction is also shown on each route.
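Since the example's network data is not reproduced in the text, the sketch below applies a standard augmenting-path method (Edmonds-Karp) to an assumed illustrative network:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest residual path."""
    # residual capacities, including zero-capacity reverse arcs
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # breadth-first search for an augmenting path in the residual network
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow               # no augmenting path remains
        # bottleneck capacity along the path, then augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Illustrative network (arc capacities are assumed, not from the text):
cap = {"s": {"a": 4, "b": 3}, "a": {"b": 2, "t": 3}, "b": {"t": 4}, "t": {}}
print(max_flow(cap, "s", "t"))   # -> 7
```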
4.5 MINIMUM COST FLOW PROBLEM
The minimum cost flow problem occupies a central position among network
optimization problems: first, it encompasses a broad class of applications,
and second, it can be solved very efficiently.
All of the previous network problems are special cases of the minimum cost
flow problem. Like the maximum flow problem, it considers flows in networks
with capacities. Like the shortest path problem, it considers a cost per unit
of flow on each arc. Like the transportation problem, it allows multiple
origins and destinations. Therefore, all of these problems can be seen as
special cases of the minimum cost flow problem.
The problem is to minimize the total cost subject to the availability and
demand at certain nodes and to the upper bound on the flow through each arc.
The optimal solution is: X12 = 12, X13 = 8, X23 = 8, X24 = 4, X34 = 11,
X35 = 5, X45 = 10, with all other Xij = 0. The optimal cost is $150.
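The stated solution can at least be checked for flow conservation. The sketch below computes each node's net outflow from the given arc flows; since the text's figure (arc costs, capacities, supplies) is not reproduced here, the supplies and demands are inferred from the balances, and the $150 cost itself cannot be verified without the arc costs.

```python
# Arc flows from the stated optimal solution: (i, j) -> Xij
flows = {(1, 2): 12, (1, 3): 8, (2, 3): 8, (2, 4): 4,
         (3, 4): 11, (3, 5): 5, (4, 5): 10}

# Net outflow per node: positive = supply node, negative = demand node,
# zero = intermediate (transshipment) node, where inflow equals outflow.
balance = {}
for (i, j), x in flows.items():
    balance[i] = balance.get(i, 0) + x  # flow leaving node i
    balance[j] = balance.get(j, 0) - x  # flow entering node j
```

The balances come out as node 1 supplying 20 units, nodes 4 and 5 demanding 5 and 15 units, and nodes 2 and 3 acting as transshipment nodes, so the stated flows do satisfy conservation.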
4.6 LINEAR PROGRAMMING IN NETWORK THEORY
Linear programming is currently the most widely used mathematical technique,
thanks to the efficiency of the simplex algorithm and to the development of
computing. The aim of applying linear programming is to solve common, and
also very varied, business problems in which, in general, a set of needs must
be met with a limited or scarce number of resources, and to do so optimally.
Example
A company has stopped manufacturing certain products, thereby freeing the
production load that its equipment carried in the machining departments.
Machine hours are now available that can be devoted to the products referred
to as 1, 2, and 3, as follows:

                        Hours per piece of product    Hours available
                           1       2       3          per week
Milling machine            9       3       5          500
Lathe                      5       4       -          350
Grinder                    3       -       2          150
Profit ($/piece)           50      20      25
Minimum production
recommended by Sales       30      15      20
Formulate a linear programming model for this problem.
Definition of the variables to be used in the linear programming model:
Let Xj = number of pieces of product j (j = 1, 2, 3) to be manufactured per
week so as to maximize profit.
Objective function:
MAX Z = 50X1 + 20X2 + 25X3 [($/piece)(pieces/week)] = [$/week]
Subject to the restrictions on machine hours available per week:
Milling machine: 9X1 + 3X2 + 5X3 ≤ 500 machine hours for the milling machine
Lathe: 5X1 + 4X2 ≤ 350 machine hours for the lathe
Grinder: 3X1 + 2X3 ≤ 150 machine hours for the grinder
Minimum production conditions on the variables:
X1 ≥ 30 pieces
X2 ≥ 15 pieces
X3 ≥ 20 pieces
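Since the model is small, its optimum can be checked by brute force over integer production levels. This is a sketch: it assumes whole pieces are required, and the upper search bounds are implied by the constraints (e.g. the grinder row caps X1 at 50 even with X3 at its minimum).

```python
# Exhaustive search over integer production plans for the machining example.
best, best_plan = None, None
for x1 in range(30, 51):          # X1 >= 30; grinder row bounds X1 above
    for x2 in range(15, 51):      # X2 >= 15; lathe row bounds X2 above
        for x3 in range(20, 31):  # X3 >= 20; grinder row bounds X3 above
            if (9*x1 + 3*x2 + 5*x3 <= 500    # milling machine hours
                    and 5*x1 + 4*x2 <= 350   # lathe hours
                    and 3*x1 + 2*x3 <= 150): # grinder hours
                z = 50*x1 + 20*x2 + 25*x3    # weekly profit in dollars
                if best is None or z > best:
                    best, best_plan = z, (x1, x2, x3)
```

Under these assumptions, the search finds Z = $2,860 per week at (X1, X2, X3) = (30, 43, 20), with the milling machine as the binding resource; the continuous (simplex) optimum would be slightly higher at X2 = 43⅓.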
CONCLUSION
Network optimization models are a very simple tool for finding the optimal
solution to network flow problems, because they provide easy-to-understand,
easy-to-apply algorithms that, compared with the simplex method, reduce the
number of iterations needed to solve the problem.
If the simplex method were applied to a distribution or network problem, the
model would have many variables and constraints, and computational tools
would be needed to find the optimal solution quickly. With network models,
one only needs to draw the graph that represents the network of the problem
and then apply the corresponding algorithm, which may be the shortest path
algorithm, the minimum spanning tree algorithm, the augmenting path
algorithm, or the maximum flow algorithm.
Although minimum cost flow problems and the shortest path problem can be
formulated as linear programming models and then solved with the simplex
method, this is not advisable; solving the problem with network methods
improves the efficiency of the calculations.
COMPARATIVE CHART
Characteristics:
Many commercial problems can be solved through network models without
additional restrictions; the solution is obtained with small algorithms, no
matter the size of the problem.

Advantages:
1. They can be solved very quickly. Problems that as linear programs would
have 1,000 rows and 30,000 columns can be solved in seconds. This allows
network models to be used in many applications (such as real-time decision
making) for which linear programming is not ideal.
2. They naturally yield integer solutions. Recognizing that a problem can be
formulated as a network model allows us to solve special types of integer
programming problems, increasing efficiency and reducing the time spent by
the classical linear programming algorithms.

Disadvantages:
1. Developing a clear and logical network is an area of some difficulty;
there seems to be no way to ensure that the networks accurately reflect the
conception of those in charge of planning the work.
2. Determining the correct level of detail of the network. Many companies
tend to vary the detail of their networks depending on the administrative
level at which they are handled and the use that will be given to them.