
Stochastic Modeling and Simulation

Dr. Nawel ARRAR


National School of Artificial Intelligence
2023-2024

Part I

Stochastic Processes and their Classification
Chapter 1

Discrete-Time Markov Chains

We cover Markov chains in depth, starting with Discrete-Time Markov Chains
(DTMCs). In a DTMC, time is broken up into synchronized time steps. An
event (an arrival or a departure) can only occur at the end of a time step.
This property makes DTMCs a little awkward for modeling computer systems.
However, there are many other problems that are well modeled by DTMCs.
In Continuous-Time Markov Chains (CTMCs), events can happen at any
moment in time, which makes CTMCs convenient for modeling such systems.

1.1 Stochastic Processes


Definition 1.1 A stochastic process (or random process) is a sequence of
random variables $X_t$:
$$\{X_t;\; t \in T\},$$
defined on the same probability space $(\Omega, \mathcal{A}, P)$. The parameter $t$ is
generally interpreted as time and belongs to a given set $T$.
A continuous-time process is one in which the set $T$ is uncountable
(usually $\mathbb{R}_+$). It is denoted by $\{X(t);\; t \geq 0\}$.
A discrete-time process occurs when $T$ is finite or at least countable
(most often $T = \mathbb{Z}_+$). It is denoted by $\{X_n;\; n \geq 0\}$; we then speak of a
stochastic sequence, while the term process is reserved for the continuous
case.

Definition 1.2 The set of all possible values of the variables defining a
stochastic process is called the state space of the process, and is denoted $S$.
If this set is finite or countably infinite, the process is called a chain.

Definition 1.3 A random process $\{X_n;\; n = 0, 1, 2, \ldots\}$, where $X_n$
denotes the state at (discrete) time step $n$, is Markovian if for all states
$i_0, \ldots, i_{n-1}, i_n, i_{n+1}$ and every integer $n \geq 0$,
$$P[X_{n+1} = i_{n+1} \mid X_n = i_n, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0] = P[X_{n+1} = i_{n+1} \mid X_n = i_n], \qquad (1.1)$$
independently of the time step and of the past history.

Definition 1.4 The Markov property states that the conditional distribution
of any future state $X_{n+1}$, given the past states $X_0, X_1, \ldots, X_{n-1}$
and the present state $X_n$, is independent of the past states and depends
only on the present state $X_n$.

Remark 1.5 The process is also said to be memoryless.

1.2 Discrete-Time Markov Chains


Gathering the different notions presented above, we finally obtain the follow-
ing definition.

Definition 1.6 A Discrete-Time Markov Chain is a stochastic process
$\{X_n;\; n \geq 0\}$ satisfying the following three restrictions:

the process evolves in discrete time;

the state space $S$ is finite or countable;

the process satisfies the Markov property (1.1).

In the following, we impose two additional restrictions on the DTMC. The
first concerns the state space $S$, which we consider to be of finite cardinality.
The second restriction corresponds to the time homogeneity property.

Definition 1.7 A Markov chain (MC) is homogeneous (over time) if the
probability of making a transition from one state to another is independent
of the time at which the transition takes place. In other words, for every
instant $n \geq 0$ and for any pair $(i, j)$,
$$P[X_n = j \mid X_{n-1} = i] = P[X_{n+k} = j \mid X_{n+k-1} = i], \qquad (1.2)$$
for all $k \geq 0$.
Remark 1.8 Note, however, that the results presented remain valid for ho-
mogeneous chains with an infinite but countable number of states.

1.2.1 Representative Graphs


A MC, or more precisely its transition matrix, can be represented by an
oriented graph $G$ whose vertices correspond to the states of the chain, and
where an arc connects the vertices associated with states $i$ and $j$ if the
transition probability from $i$ to $j$ is positive. The graph thus defined is
called the representative graph of the Markov chain, as the sketch below
illustrates.
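As an illustration, here is a minimal Python sketch (the helper name graph_arcs and the 3-state matrix are ours, for illustration only) that extracts the arcs of the representative graph from a transition matrix:

```python
# Minimal sketch: list the arcs (i, j) of the representative graph,
# i.e. the pairs of states with a positive transition probability.

def graph_arcs(P):
    """Return the set of arcs (i, j) such that p_ij > 0."""
    return {(i, j)
            for i, row in enumerate(P)
            for j, p in enumerate(row)
            if p > 0}

# Illustrative 3-state transition matrix (our choice).
P = [[0.5, 0.5, 0.0],
     [0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0]]
print(sorted(graph_arcs(P)))  # [(0, 0), (0, 1), (1, 2), (2, 0)]
```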

1.2.2 Transition Probabilities and the Transition Matrix


Conditional probabilities do not vary over time. We can therefore simplify
the notation and define the probabilities
$$p_{ij} = P[X_n = j \mid X_{n-1} = i] \qquad (1.3)$$
for any pair $(i, j)$, independently of $n$.
Definition 1.9 The probability $p_{ij}$ is called the transition (or passage)
probability from state $i$ to state $j$ in one step; it is equal to the
conditional probability that the system will be in state $j$ at the next step,
given that it is currently in state $i$.
If a MC has $s = |S|$ states, there exist $s^2$ transition probabilities, which
can be arranged in a square $s \times s$ matrix. We suppose that the states of
the chain are numbered from $1$ to $s$, and we then speak of state $k$ rather
than the $k$th state of the chain in the selected order.
Definition 1.10 The transition probability matrix associated with any
DTMC is the matrix
$$P = (p_{ij}),$$
whose entry in row $i$ and column $j$ is equal to the probability of moving to
state $j$ on the next transition, given that the current state is $i$.
Transition matrices are also called stochastic matrices and satisfy the
following two conditions:

1. their elements are non-negative:
$$p_{ij} \geq 0, \qquad \forall i, j \in S;$$

2. the sum of the elements of each row is equal to 1:
$$\sum_{j \in S} p_{ij} = 1, \qquad i \in S.$$
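Both conditions are easy to verify numerically; here is a small Python sketch (the function is_stochastic is a hypothetical helper, not part of the course):

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """Check the two stochastic-matrix conditions."""
    P = np.asarray(P, dtype=float)
    nonnegative = np.all(P >= 0)                      # p_ij >= 0 for all i, j
    rows_sum_to_one = np.allclose(P.sum(axis=1), 1.0,  # sum_j p_ij = 1
                                  atol=tol)
    return bool(nonnegative and rows_sum_to_one)

P = [[0.95, 0.05],
     [0.40, 0.60]]
print(is_stochastic(P))  # True
```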

Example 1.11 (Repair facility problem) A machine is either working
or in the repair center. If it is working today, then there is a 95% chance
that it will be working tomorrow. If it is in the repair center today, then
there is a 40% chance that it will be working tomorrow. We are interested
in questions like "what fraction of time does my machine spend in the repair
shop?"
Question 1. Describe the DTMC for the repair facility problem.
Question 2. Now suppose that after the machine remains broken for
4 days, it is replaced with a new machine. How does the DTMC diagram
change?
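Before answering formally, one can already estimate the motivating question by simulation. The sketch below (our code, under the 95%/40% figures of the example) runs the chain for many days and reports the empirical fraction spent in the repair center:

```python
import random

# State 0 = working, state 1 = in the repair center (Example 1.11).
P = [[0.95, 0.05],   # from working: P(work tomorrow), P(repair tomorrow)
     [0.40, 0.60]]   # from repair:  P(work tomorrow), P(repair tomorrow)

random.seed(0)
state, days_in_repair, horizon = 0, 0, 100_000
for _ in range(horizon):
    # Draw the next state according to the current row of P.
    state = 0 if random.random() < P[state][0] else 1
    days_in_repair += (state == 1)

print(days_in_repair / horizon)  # ≈ 0.11, i.e. about 1 day in 9
```

The limiting value $1/9$ is confirmed analytically in Example 1.13 below, where the limit of $P^n$ has second column $a/(a+b) = 0.05/0.45$.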

1.2.3 Powers of P: n-Step Transition Probabilities


The transition probabilities $p_{ij}$, and more generally $P$, describe the
evolution of the chain state step by step. It is also possible to calculate
transition probabilities over several steps. We introduce the notation
$$p_{ij}^{(m)} = P[X_{n+m} = j \mid X_n = i]$$
for the conditional probability of going from $i$ to $j$ in exactly $m$ steps.
The probabilities $p_{ij}^{(m)}$ are called transition probabilities in $m$
steps, and the matrix $P^{(m)}$ whose element $(i, j)$ is equal to $p_{ij}^{(m)}$
is called the transition matrix in $m$ steps.

To calculate transition probabilities in $m$ steps, consider first the probability
that a process goes from state $i$ to state $j$ in two steps:
$$p_{ij}^{(2)} = P[X_2 = j \mid X_0 = i] = \sum_{k \in S} P[X_2 = j \mid X_1 = k]\, P[X_1 = k \mid X_0 = i] = \sum_{k \in S} p_{ik}\, p_{kj}.$$
So $p_{ij}^{(2)}$ is simply equal to the element $(i, j)$ of the matrix $P^2$.

Theorem 1.12 The probability $p_{ij}^{(m)}$ that a MC will be in state $j$ after $m$
steps, given that it is currently in state $i$, is given by the element $(i, j)$ of
the matrix $P^m$.

In matrix form, this property becomes
$$P^{(m)} = \left(p_{ij}^{(m)}\right) = P^m,$$
where $P^m = P \cdot P \cdots P$, multiplied $m$ times. We can use the notation
$P_{ij}^m$ to denote $(P^m)_{ij}$.
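Numerically, the $m$-step transition matrix is just a matrix power; a short Python sketch (illustrative values, our choice):

```python
import numpy as np

# m-step transition probabilities as the m-th power of P (Theorem 1.12).
P = np.array([[0.95, 0.05],
              [0.40, 0.60]])

P2 = np.linalg.matrix_power(P, 2)
# Entry (0, 1) is p_01^(2) = p_00*p_01 + p_01*p_11 = 0.95*0.05 + 0.05*0.60.
print(P2[0, 1])                       # 0.0775
print(np.linalg.matrix_power(P, 10))  # 10-step transition matrix
```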

Example 1.13 (Repair facility problem) Now we consider again the
simple repair facility problem, with general transition probability matrix $P$:
$$P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}, \qquad 0 < a < 1, \quad 0 < b < 1.$$
We prove by induction that
$$P^n = \begin{pmatrix} \dfrac{b + a(1-a-b)^n}{a+b} & \dfrac{a - a(1-a-b)^n}{a+b} \\[2mm] \dfrac{b - b(1-a-b)^n}{a+b} & \dfrac{a + b(1-a-b)^n}{a+b} \end{pmatrix}, \qquad \lim_{n \to \infty} P^n = \begin{pmatrix} \dfrac{b}{a+b} & \dfrac{a}{a+b} \\[2mm] \dfrac{b}{a+b} & \dfrac{a}{a+b} \end{pmatrix}.$$
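The closed form can be checked numerically against a direct matrix power; a sketch with illustrative values $a = 0.05$, $b = 0.40$ (our choice, not fixed by the course):

```python
import numpy as np

a, b, n = 0.05, 0.40, 20
P = np.array([[1 - a, a],
              [b, 1 - b]])

r = (1 - a - b) ** n
closed_form = np.array([[b + a * r, a - a * r],
                        [b - b * r, a + b * r]]) / (a + b)

# The induction formula agrees with the direct computation of P^n.
print(np.allclose(np.linalg.matrix_power(P, n), closed_form))  # True
```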

1.2.4 Classification of Chain States


There are several types of states. The classification of the states of a Markov
chain plays an essential role in the study of its long-term behavior.

Definition 1.14 Let $i, j$ be two states ($i, j \in S$).
We say that $i$ leads to $j$ if there is an integer $n \geq 0$ such that $p_{ij}^n > 0$.
We say that $i$ and $j$ communicate if $i$ leads to $j$ and $j$ leads to $i$.
Note that $i$ and $j$ do not communicate if one (or both) of the following
equalities is satisfied:
$$p_{ij}^n = 0 \quad \forall n \geq 0, \qquad p_{ji}^n = 0 \quad \forall n \geq 0.$$

Recurrence
Definition 1.15 Let $(X_n)$ be a Markov chain and let $i$ be a fixed state. For
any integer $n \geq 1$,
$$f_{ii}^n = P[X_n = i;\; X_k \neq i,\; k = 1, 2, \ldots, n-1 \mid X_0 = i], \qquad f_{ii}^0 = 0,$$
is the probability that, starting from state $i$, the first return to state $i$
occurs at the $n$th transition.
Definition 1.16 A state $i$ is recurrent if
$$f_{ii} = P(\text{returning to state } i \mid \text{starting from state } i) = 1.$$
Otherwise ($f_{ii} < 1$), the state is transient.
Theorem 1.17 A state $i$ is recurrent if and only if $\sum_{n=0}^{\infty} p_{ii}^n = \infty$.
Corollary 1.18 If $i$ and $j$ communicate and $i$ is recurrent, then $j$ is recur-
rent.
Proposition 1.19 $f_{ii}^1 = p_{ii}$ and $p_{ii}^n = \sum_{k=0}^{n} f_{ii}^k\, p_{ii}^{n-k}$.
Definition 1.20 A recurrent state $i$ is null recurrent if
$$\mu_i = E[\text{time of first return to } i \mid \text{departure from } i] = \infty.$$
Otherwise ($\mu_i < \infty$), the state is positive recurrent. Here, time is mea-
sured in number of transitions.

Periodicity
Definition 1.21 The period $d(i)$ of a state $i$ is defined as
$$d(i) = \gcd\{n \geq 1 : p_{ii}^n > 0\}.$$
Conventionally, $d(i) = 0$ if the above set is empty. A state $i$ is periodic
if $d(i) > 1$. Otherwise ($d(i) = 0$ or $1$), the state is not periodic; a state
with period 1 is called aperiodic. A chain $(X_n)$ is said to be aperiodic if
all of its states are aperiodic.

Remark 1.22 If $i$ and $j$ communicate, then $d(i) = d(j)$.
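The period of a state can be estimated by scanning the diagonal entries of successive powers of $P$; the sketch below is our helper, truncated at a finite horizon, so it only approximates the gcd over all $n$:

```python
import math
import numpy as np

def period(P, i, n_max=50):
    """Approximate d(i) = gcd{n >= 1 : p_ii^n > 0} up to n_max steps."""
    P = np.asarray(P, dtype=float)
    Q, returns = np.eye(len(P)), []
    for n in range(1, n_max + 1):
        Q = Q @ P                # Q = P^n
        if Q[i, i] > 0:
            returns.append(n)
    return math.gcd(*returns) if returns else 0

P = [[0, 1],
     [1, 0]]          # the chain alternates deterministically
print(period(P, 0))   # 2: returns to state 0 only at even times
```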

Aperiodicity is clearly necessary for the limiting probabilities to exist.
However, in an aperiodic Markov chain it could still turn out that the limiting
probabilities depend on the start state, whereas we want $\lim_{n \to \infty} p_{ij}^n$
to be the same for all $i$.
If we also want the limiting probabilities to be independent of the start
state, we need one more condition, known as irreducibility, which says that
from any state one can get to any other state.

Definition 1.23 A Markov chain is said to be irreducible if all states com-
municate with each other.

Ergodicity
Definition 1.24 An aperiodic and positive recurrent state is said to be er-
godic. This is in particular the case for a state $i$ such that $p_{ii} = 1$, which
is said to be absorbing.

Example 1.25 Consider a DTMC on the states $0, 1, 2, 3, 4, 5$ with the one-
step transition probabilities given by the matrix
$$P = \begin{pmatrix}
0 & 1/3 & 1/3 & 0 & 1/3 & 0 \\
0 & 0 & 0 & 1/2 & 0 & 1/2 \\
0 & 1/2 & 0 & 0 & 0 & 1/2 \\
0 & 0 & 0 & 0 & 0 & 1 \\
1/2 & 0 & 1/2 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0
\end{pmatrix}.$$
Describe the type of its states.
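As a partial aid (not a full classification), one can compute which states communicate by testing reachability through the positive entries of $P$; a Python sketch:

```python
import numpy as np

# Transition matrix of Example 1.25 (states 0..5).
P = np.array([
    [0,   1/3, 1/3, 0,   1/3, 0  ],
    [0,   0,   0,   1/2, 0,   1/2],
    [0,   1/2, 0,   0,   0,   1/2],
    [0,   0,   0,   0,   0,   1  ],
    [1/2, 0,   1/2, 0,   0,   0  ],
    [0,   1,   0,   0,   0,   0  ],
])

A = (P > 0).astype(int)                # adjacency of the representative graph
reach = np.eye(len(P), dtype=int)      # n = 0 steps: i leads to i
for _ in range(len(P)):
    reach = ((reach + reach @ A) > 0).astype(int)  # extend paths by one arc

communicate = reach & reach.T          # i <-> j: i leads to j and j leads to i
classes = {frozenset(np.flatnonzero(row)) for row in communicate}
for c in sorted(classes, key=min):
    print(sorted(int(s) for s in c))   # [0, 4], [1, 3, 5], [2]
```

Here $\{1, 3, 5\}$ is a closed communicating class (hence recurrent), while $\{0, 4\}$ and $\{2\}$ can be left and never re-entered (hence transient), consistent with the classification asked for above.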

1.2.5 Stationary Distribution

Limiting Probabilities
We now move on to looking at the limit. Consider the $(i, j)$th entry of the
power matrix $P^n$ for large $n$:
$$\lim_{n \to \infty} P_{ij}^n = \lim_{n \to \infty} (P^n)_{ij}.$$
This quantity represents the limiting probability of being in state $j$ infi-
nitely far into the future, given that we started in state $i$.

Definition 1.26 Let
$$\pi_j = \lim_{n \to \infty} P_{ij}^n = \lim_{n \to \infty} p_{ij}^{(n)}.$$
$\pi_j$ represents the limiting probability that the chain is in state $j$ (inde-
pendent of the starting state $i$). For an $n$-state DTMC with states $0, 1, \ldots, n-1$,
$$\vec{\pi} = (\pi_0, \pi_1, \ldots, \pi_{n-1}), \qquad \text{where } \sum_{i=0}^{n-1} \pi_i = 1,$$
represents the limiting distribution of being in each state.

For the rest of this chapter, we assume that the limiting probabilities
exist.
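Numerically, the limiting probabilities can be approximated by raising $P$ to a large power; a quick sketch (illustrative matrix, our choice):

```python
import numpy as np

P = np.array([[0.95, 0.05],
              [0.40, 0.60]])

# For large n, all rows of P^n become (numerically) identical:
print(np.linalg.matrix_power(P, 200))
# Both rows ≈ [0.888..., 0.111...] = [8/9, 1/9]; the limit no longer
# depends on the starting state.
```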

Stationary Distribution
Let’s consider a system evolving randomly over time, and assume that the
evolution of the state of this system can be modeled by a Markov chain.
Among the key questions for evaluating the performance of such a system,
we can cite the following:

1. What is the probability of the system returning to state $i$ after $n$
transitions?

2. What is the proportion of time that the system spends in state $i$?

3. If the system is initially in state $i$, how many transitions will it make,
on average, before visiting state $j$ for the first time?

First of all, let us recall that the state of a Markov chain at a given
time obviously depends on the transition probabilities $p_{ij}$, but also on the
initial distribution
$$\pi_i^{(0)} = P(X_0 = i), \qquad i \in S.$$
Remark 1.27 The special case where the initial state is known corresponds
to a vector $\pi^{(0)}$ whose components are all zero, except for the component
associated with the initial state, which is 1.
Starting from an initial distribution $\pi^{(0)}$, the probability $\pi_i^{(1)}$ that the
system will be in state $i$ after one transition is
$$\pi_i^{(1)} = P(X_1 = i) = \sum_{j \in S} P(X_1 = i \mid X_0 = j)\, P(X_0 = j) = \sum_{j \in S} \pi_j^{(0)}\, p_{ji}.$$

In matrix form, this equality can be written as
$$\pi^{(1)} = \pi^{(0)} P.$$
In addition, using the homogeneity of the transition probabilities, we also
have
$$\pi^{(n)} = \pi^{(n-1)} P, \qquad n = 1, 2, \ldots,$$
hence the following result:
Theorem 1.28 If the distribution of the initial state of a Markov chain is
given by the probability vector $\pi^{(0)}$, the distribution of the chain states after
$n$ steps is
$$\pi^{(n)} = \pi^{(0)} P^n, \qquad (1.4)$$
where $P$ is the transition matrix of the chain.
This study seeks to determine whether, as the number $n$ of steps goes to
infinity, the distribution $\pi^{(n)}$ converges to a distribution $\pi$; such a limit
must not be affected by a transition. Indeed, if $\pi^{(n)}$ converges to $\pi$, then
$\pi^{(n+1)} = \pi^{(n)} P$ converges to the same limit, and since the matrix $P$ has
all its components bounded, $\pi$ satisfies
$$\pi = \pi P.$$
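Equation (1.4) also gives a simple fixed-point iteration; a sketch (illustrative values, our choice) that propagates $\pi^{(0)}$ and watches it settle on $\pi = \pi P$:

```python
import numpy as np

P = np.array([[0.95, 0.05],
              [0.40, 0.60]])

pi = np.array([0.0, 1.0])   # known initial state (Remark 1.27): state 1
for _ in range(100):
    pi = pi @ P             # pi^(n) = pi^(n-1) P
print(pi)                   # ≈ [8/9, 1/9], the fixed point of pi = pi P
```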

Definition 1.29 A probability distribution $\pi$ on the states of a Markov chain
is invariant or stationary if it satisfies
$$\pi = \pi P. \qquad (1.5)$$

Definition 1.30 A Markov chain for which the limiting probabilities exist
is said to be stationary or in steady state if the initial state is chosen
according to the stationary probabilities.

If a Markov chain has a long-term distribution, it depends a priori not only
on $P$, but also on $\pi^{(0)}$. Indeed, letting $n$ go to infinity in equation (1.4),
we obtain
$$\pi = \pi^{(0)} P^{\infty},$$
where $P^{\infty}$ denotes, if it exists, the limit of $P^n$ as $n$ goes to infinity.

Theorem 1.31 The distribution $\pi^{(n)}$ of a Markov chain goes to a limit $\pi$
independent of the distribution $\pi^{(0)}$ if and only if the matrix $P^n$ goes to a
limit $P^{\infty}$ whose rows are all identical to each other and, moreover, identical
to $\pi$.

Remark 1.32 Provided the limiting distribution $\{\pi_j;\; j = 0, 1, \ldots, n-1\}$ ex-
ists, we can obtain it by solving the stationary equations
$$\pi = \pi P \qquad \text{and} \qquad \sum_{i=0}^{n-1} \pi_i = 1,$$
where $\pi = (\pi_0, \pi_1, \ldots, \pi_{n-1})$.

Remark 1.33 The same result holds for infinite-state DTMCs.
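In practice, the stationary equations form a linear system that can be solved directly; a minimal sketch (the helper stationary is ours, for illustration only):

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P together with sum_i pi_i = 1 by least squares."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    A = np.vstack([P.T - np.eye(n),   # (P^T - I) pi^T = 0
                   np.ones(n)])       # normalization: sum_i pi_i = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = [[0.95, 0.05],
     [0.40, 0.60]]
print(stationary(P))  # ≈ [0.8889, 0.1111] = [8/9, 1/9]
```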

Example 1.34 (Repair facility problem with cost) Consider again the
repair facility problem represented by the finite-state DTMC. The help desk
is trying to figure out how much to charge me for maintaining my machine.
They figure that it costs them 3000 DA every day that my machine is in
repair. What will my annual repair bill be?
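A sketch of the computation, assuming the 95%/40% figures of Example 1.11 carry over (an assumption; the course may intend different numbers): in the long run the machine is in repair a fraction $a/(a+b)$ of the days, so the annual bill is roughly

```python
# Long-run fraction of days in repair: a/(a+b) with a = 0.05, b = 0.40.
pi_repair = 0.05 / (0.05 + 0.40)          # = 1/9
annual_bill = 365 * pi_repair * 3000      # 3000 DA per repair day
print(round(annual_bill))                 # ≈ 121667 DA per year
```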

Example 1.35 We describe, if possible, the long-term behavior of the distri-
butions of the states of three Markov chains, whose transition matrices are:

1. $$P = \begin{pmatrix} 1/4 & 0 & 3/4 \\ 1/4 & 1/4 & 1/2 \\ 1/4 & 1/2 & 1/4 \end{pmatrix}$$

2. $$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$

3. $$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

The three previous examples illustrate the possible behaviors of the long-
term distribution of states in a Markov chain:

1. The powers of $P$ converge to a stochastic matrix $P^{\infty}$ whose rows are all
equal to the same probability vector $\pi$, and the distribution of chain
states also converges to $\pi$, independently of the initial distribution
$\pi^{(0)}$.

2. The powers of $P$ do not converge, nor does the distribution of chain
states.

3. The powers of $P$ converge to a stochastic matrix $P^{\infty}$ whose rows are
not all equal. The distribution of chain states admits a limit as $n$
tends to infinity, but this limit depends on the initial distribution $\pi^{(0)}$.
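These behaviors can be observed numerically by raising each matrix to a large power; a sketch:

```python
import numpy as np

P1 = np.array([[1/4, 0,   3/4],
               [1/4, 1/4, 1/2],
               [1/4, 1/2, 1/4]])
P2 = np.array([[0, 1],
               [1, 0]])
P3 = np.array([[1,   0,   0,   0],
               [1/2, 0,   1/2, 0],
               [0,   1/2, 0,   1/2],
               [0,   0,   0,   1]])

for name, P in [("P1", P1), ("P2", P2), ("P3", P3)]:
    print(name, "\n", np.round(np.linalg.matrix_power(P, 1000), 4))
# P1: all rows identical (case 1).  P2: powers alternate between I and P2,
# so no limit exists (case 2; compare exponents 1000 and 1001).  P3: rows
# converge but differ, so the limit depends on the start state (case 3).
```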
