Discrete Finite Markov Chains Notes

Bensalem Mohammed Abderrahmane

1 Introduction
• A stochastic process is a sequence of random variables $\{X_t\}_{t=0}^{n}$.

• if $t \in \mathbb{N}$ we call it a discrete stochastic process.

• if $n < \infty$ we call it a finite stochastic process.

• random events are modeled using stochastic processes.

• a Markov chain is a model defined on a set of states S and a transition probability matrix P.

• a Markov chain is characterized by the state at instant t, noted $X_t$, depending only on the previous state $X_{t-1}$:

$$P[X_t = v \mid X_0 = i, \dots, X_{t-1} = j] = P[X_t = v \mid X_{t-1} = j] \tag{1}$$

• Markov chains are called memoryless since the next state depends only on the current state.

• MCs are used in computer vision, machine learning (hidden Markov models), randomized algorithms, and modeling web surfing (e.g., the random-surfer model behind PageRank)...

• Given a transition probability matrix P, the probability of the transition i → j in a single step is denoted $P_{i,j}$, which is $P[X_t = j \mid X_{t-1} = i]$

• the probability of the transition i → j in k steps is denoted $P^k_{i,j}$

• in a Markov chain the sum of the probabilities on the edges going out of a node should always equal 1.

• if the probability of the transition i → j is the same as that of j → i for all pairs of nodes then the graph is undirected, otherwise it is directed (a simulation sketch follows this list)

1.1 Example
given a betting game where the player bets 1$ at each step, starting from an initial position 0$ < i < 1000$. If the player reaches 0$ or 1000$ the game ends. If we model this game as a Markov chain we get the graph below: p represents the probability of winning the bet (+1$) while q represents the probability of losing the bet (−1$); the absorbing states are the final states of the system.

Figure 1: betting game illustration

• the TPM is always square with non-negative entries and each row summing to 1

• if we want to know the probability of the transition from A to C in two steps, $P[X_2 = C \mid X_0 = A]$, then the calculation is as follows (states ordered A, B, C):

$$P^2 = \begin{pmatrix} 0 & \frac{2}{3} & \frac{1}{3} \\ \frac{1}{2} & 0 & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & 0 \end{pmatrix} \begin{pmatrix} 0 & \frac{2}{3} & \frac{1}{3} \\ \frac{1}{2} & 0 & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & 0 \end{pmatrix} = \begin{pmatrix} \frac{1}{2} & \frac{1}{6} & \frac{1}{3} \\ \frac{1}{4} & \frac{7}{12} & \frac{1}{6} \\ \frac{1}{4} & \frac{1}{3} & \frac{5}{12} \end{pmatrix} \implies P^2_{A,C} = \frac{1}{3} \tag{2}$$

Figure 2: transition matrix
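A quick numerical check of this two-step computation, assuming the matrix reconstructed above:

```python
import numpy as np

# transition matrix from the worked example, states ordered (A, B, C)
P = np.array([[0.0, 2/3, 1/3],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

P2 = np.linalg.matrix_power(P, 2)  # two-step transition probabilities
print(P2[0, 2])                    # P[X_2 = C | X_0 = A] = 1/3
```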

2 Stationary Distribution
• A stationary distribution for a Markov chain is a probability vector of size |S|, where S is the set of states; this vector describes the long-run behavior of the Markov chain as time progresses.

• Consider a weather model with three possible states: sunny (S), cloudy (C), and rainy
(R). We’ll represent the transition probabilities between these states using the following
transition matrix P :

 
$$P = \begin{pmatrix} 0.6 & 0.3 & 0.1 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.4 & 0.5 \end{pmatrix}$$

• This matrix represents the probabilities of transitioning from one weather state to
another over a certain time period. For example, the entry in the first row and first
column (0.6) represents the probability of transitioning from a sunny day to another
sunny day.

• Now, let’s find the stationary distribution for this weather model. We’re looking for a
probability distribution π = (πS , πC , πR ) such that π = π · P .
To find π, we can solve the equation π · P = π for π.

$$\pi \cdot P = \pi$$

• Using matrix multiplication, this equation becomes:

$$\begin{pmatrix} \pi_S & \pi_C & \pi_R \end{pmatrix} \begin{pmatrix} 0.6 & 0.3 & 0.1 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.4 & 0.5 \end{pmatrix} = \begin{pmatrix} \pi_S & \pi_C & \pi_R \end{pmatrix}, \qquad \sum_{i=1}^{|S|} \pi_i = 1$$

• we solve this system and find $\pi = (12/41,\ 19/41,\ 10/41) \approx (0.29,\ 0.46,\ 0.24)$, which means that as the Markov chain progresses we get a sunny day about 29% of the time, a cloudy day about 46% of the time, and a rainy day about 24% of the time (verified numerically in the sketch below)

• the sum of elements of the stationary distribution π should always equal 1
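A sketch of one way to solve this system numerically, taking π as the left eigenvector of P for eigenvalue 1 (a standard linear-algebra approach, not one prescribed by the notes):

```python
import numpy as np

# weather model, states ordered (S, C, R)
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# left eigenvectors of P are right eigenvectors of P.T;
# the stationary distribution corresponds to eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()               # normalize so the entries sum to 1
print(pi)                    # ≈ [0.2927, 0.4634, 0.2439] = (12/41, 19/41, 10/41)
print(pi @ P)                # equals pi: the distribution is stationary
```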

• Irreducibility and aperiodicity are two important properties of Markov chains that play
a key role in determining their behavior and whether they possess a unique stationary
distribution.

• A Markov chain is irreducible if it is possible to reach any state from any other state
with positive probability in a finite number of steps. In other words, there are no
"islands" or disconnected parts of the state space that cannot be reached from other
parts. If a Markov chain is irreducible, it means that there is a single, unified system
with a common long-term behavior.

• For example, in the weather model example, if it were impossible to transition from
a rainy day to a sunny day, the Markov chain would not be irreducible because there
would be no way to reach certain states from others. However, if transitions between
all states are possible, the chain would be irreducible.

• A Markov chain is said to be aperiodic if the greatest common divisor (GCD) of the
lengths of all cycles in the chain is 1. Aperiodicity essentially means that the chain
does not exhibit any predictable periodic behavior. In an aperiodic chain, states do not
repeat in a fixed pattern.
Bensalem Mohammed Abderrahmane

• For example, if a Markov chain representing the days of the week had a cycle where
it alternated between Monday and Tuesday every two days, it would not be aperiodic
because it repeats in a predictable pattern. However, if transitions between states occur
in a way that does not follow a fixed pattern, the chain would be aperiodic.

3 Recurrence and transience


• A state v is recurrent if the probability of eventually returning to it is 1,

• meaning $P[\exists t \geq 1 : X_{t_0+t} = v \mid X_{t_0} = v] = 1$

• if all states of a MC are recurrent we say that the MC is recurrent

• A state v is transient if the probability of eventually returning to it is strictly less than 1,

• meaning $P[\exists t \geq 1 : X_{t_0+t} = v \mid X_{t_0} = v] < 1$

• if a MC contains at least one transient state then it is transient

Figure 3: transience recurrence illustration

• in figure 3, states B and C are recurrent while state A is transient; thus the MC is transient

• a positive recurrent state is a state that the chain returns to in finite expected time,

• meaning, with the first return time $\tau_u = \min\{t \geq 1 : X_t = u\}$ given $X_0 = u$, we have $E[\tau_u \mid X_0 = u] < \infty$

• a recurrent state that isn't positive recurrent is called null recurrent, meaning that the expected time to loop back is infinite.

• for this example (a random walk where p is the probability of moving forward and q = 1 − p that of going back): if p > 1/2, meaning that the probability of moving forward is greater than that of going back, then all the states are transient

• if p < 1/2, meaning that the probability of going back (looping) is greater than that of moving forward, then all the states are positive recurrent

• if p = 1/2, meaning there are equal chances of moving forward or backward, the chain will loop back but the expected time to do so is infinite, thus the states are null recurrent (estimated empirically in the sketch below).
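A simulation sketch of this trichotomy. It assumes the missing figure shows a random walk on the non-negative integers with a reflecting barrier at 0 (an assumption on our part); it estimates how often a walk started at 0 returns within a fixed horizon:

```python
import random

def return_rate(p, trials=2000, horizon=10_000):
    """Fraction of walks from state 0 that return to 0 within `horizon` steps."""
    returns = 0
    for _ in range(trials):
        x = 0
        for _ in range(horizon):
            # move forward with prob p, back with prob 1 - p (reflect at 0)
            x = x + 1 if random.random() < p else max(x - 1, 0)
            if x == 0:
                returns += 1
                break
    return returns / trials

for p in (0.6, 0.5, 0.4):
    # < 1 for p > 1/2 (transient); tends to 1 for p <= 1/2 (recurrent)
    print(p, return_rate(p))
```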

4 Periodicity
• A state is said to be ergodic if it is aperiodic and positive recurrent. A Markov chain
is ergodic if all of its states are ergodic.

• the periodicity of a state $s_j \in S$ is defined as:

$$d(s_j) = \gcd\{t \in \mathbb{Z}^+ : P^t_{s_j,s_j} > 0\}$$

• if $d(s_j) > 1$ we say that $s_j$ is periodic; if $d(s_j) = 1$ we say that $s_j$ is aperiodic

• if every state is aperiodic we say that the MC is aperiodic, otherwise it is periodic

• for state A we have the following possible loops: {A → B → A}, {A → B → A → B → A}, . . .; counting the number of steps in these loops gives {2, 4, 6, . . .}, so d(A) = gcd({2, 4, 6, . . .}) = 2 > 1 and A is periodic

• similarly for state B, d(B) = 2 > 1, so B is periodic

• for state C we can do {C → C}, {C → D → C}, {C → C → D → C}, . . ., so d(C) = gcd({1, 2, 3, 4, . . .}) = 1 and C is aperiodic

• the same holds for D: it is aperiodic

• the sets of nodes {A, B} and {C, D} are called recurrent classes, denoted $R_1 = \{A, B\}$ (periodic) and $R_2 = \{C, D\}$ (aperiodic)

• if a node E is transient it forms a transient class, denoted $T_1 = \{E\}$ (see the period-check sketch below)
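A sketch that computes periods numerically. The matrix below is a hypothetical chain consistent with the loops described above (A ↔ B, a self-loop at C, C ↔ D), since the original figure is not reproduced here:

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, s, max_t=50):
    """d(s) = gcd of all t <= max_t with P^t[s, s] > 0."""
    Pt = np.eye(len(P))
    return_times = []
    for t in range(1, max_t + 1):
        Pt = Pt @ P
        if Pt[s, s] > 1e-12:
            return_times.append(t)
    return reduce(gcd, return_times) if return_times else 0

# hypothetical chain matching the loops above: A <-> B, C -> C, C <-> D
#              A    B    C    D
P = np.array([[0.0, 1.0, 0.0, 0.0],   # A
              [1.0, 0.0, 0.0, 0.0],   # B
              [0.0, 0.0, 0.5, 0.5],   # C
              [0.0, 0.0, 1.0, 0.0]])  # D
print([period(P, s) for s in range(4)])  # [2, 2, 1, 1]
```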



• It is important that a Markov chain be aperiodic for $\lim_{t\to\infty} P^t$ to exist; if the chain is periodic, this limit does not exist:

$$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad P^2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad P^3 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \dots$$

• for this MC, which is periodic, there is no limit for $P^t$, but it still has a single stationary distribution $\pi = (1/2, 1/2)^T$
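A quick check of this oscillation, using the two-state permutation matrix above:

```python
import numpy as np

# the periodic two-state chain from the bullet above
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# P^t alternates between P and the identity, so lim P^t does not exist
for t in (1, 2, 3, 4):
    print(t, np.linalg.matrix_power(P, t).tolist())

pi = np.array([0.5, 0.5])
print(pi @ P)  # [0.5 0.5]: pi is stationary despite the periodicity
```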

5 Aperiodicity
• if an irreducible Markov chain contains at least one reflexive transition (a self-loop, $P_{i,i} > 0$) then it is aperiodic

• if $P^t_{i,i} > 0$ and $P^k_{i,i} > 0$ with $\gcd(t, k) = 1$ then state i is aperiodic

• formally speaking, an aperiodic (and irreducible) MC satisfies: $\exists k$ such that $P^k_{i,j} > 0$ for all i, j

• the number of unique stationary distributions $\pi^i$ is equal to the number of recurrent classes

• for every element $t_i$ belonging to a transient class, the corresponding stationary distribution element is $\pi_i = 0$

6 Reducibility

• the above MC model is sensitive to the initial state: the long-term behavior of the chain depends heavily on the initial state

• if from any state of the MC we can reach any other state in a finite number of steps with positive probability, we call the MC irreducible

• a MC with more than one unique stationary distribution is reducible (it decomposes into several communicating classes)

• a MC with a unique stationary distribution is irreducible (it forms a single communicating class)

• a MC is irreducible if its directed graph is strongly connected (any node can be reached from any other node); a check is sketched at the end of this section

Figure 4: irreducibility illustration

• a finite Markov chain that is irreducible and aperiodic is called ergodic.

• a MC that is irreducible and has a self-loop is aperiodic

• in a finite MC, if the chain is irreducible then all the states are positive recurrent

• if a MC is irreducible then it has a single stationary distribution
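The strong-connectivity check mentioned above, sketched with plain BFS reachability (the example matrix is a hypothetical reducible chain):

```python
import numpy as np
from collections import deque

def reachable(P, s):
    """Set of states reachable from s along edges with positive probability."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v in range(len(P)):
            if P[u, v] > 0 and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_irreducible(P):
    # strongly connected iff every state reaches every other state
    n = len(P)
    return all(len(reachable(P, s)) == n for s in range(n))

# hypothetical reducible chain: state 2 is absorbing
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.0, 1.0]])
print(is_irreducible(P))  # False: nothing leaves state 2
```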

7 Regular Markov Chain

• a regular matrix is a matrix M for which $\exists t,\ 0 < t < \infty$, such that $m_{i,j} > 0$ for every entry $m_{i,j}$ of $M^t$

• a regular MC is a MC with regular P matrix

• A finite-state Markov chain is irreducible and aperiodic if and only if it is regular

• it is important to note that the entries of $M^t$ for a regular matrix should be strictly positive, not merely non-negative

• recall that a periodic Markov chain does not necessarily have a single stationary distribution, and that the element $\pi_i$ isn't necessarily the limit of the corresponding t-step transition probabilities
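A sketch of a regularity test that simply searches for a power of P with strictly positive entries (the cutoff max_t is an arbitrary illustrative choice):

```python
import numpy as np

def is_regular(P, max_t=100):
    """True if some power P^t (t <= max_t) has strictly positive entries."""
    Pt = np.eye(len(P))
    for _ in range(max_t):
        Pt = Pt @ P
        if np.all(Pt > 0):
            return True
    return False

# the weather matrix from section 2 is regular:
# every entry of P itself is already positive
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
print(is_regular(P))   # True

# the periodic 2-state chain is not regular
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(is_regular(Q))   # False: powers alternate and always contain zeros
```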

8 Fundamental theorem of Markov chains

any finite aperiodic and irreducible (i.e., regular) Markov chain has the following properties:

• the chain has a unique stationary distribution π

• $\lim_{t\to\infty} P^t$ exists and its rows are copies of the stationary distribution
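A numerical illustration of both properties, reusing the weather matrix from section 2 (a sketch, not part of the original notes):

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

Pt = np.linalg.matrix_power(P, 100)
print(Pt)  # every row ≈ [0.2927, 0.4634, 0.2439] = (12/41, 19/41, 10/41)

# any initial distribution converges to the same stationary distribution
for mu0 in ([1, 0, 0], [0, 0, 1]):
    print(np.array(mu0) @ Pt)
```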

9 Absorbing Markov chains

• a MC is absorbing if it contains at least one absorbing state

• once a system enters an absorbing state it never leaves it

• if u is an absorbing state then formally $P_{u,u} = 1$ and $P_{u,v} = 0$ for all $v \neq u$ (detected in the sketch below)
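A small sketch that lists the absorbing states of a chain, using a 5-state version of the betting game from section 1.1 with p = q = 1/2 (the size and probabilities are illustrative assumptions):

```python
import numpy as np

def absorbing_states(P):
    """States u with P[u, u] == 1, i.e., zero probability of ever leaving."""
    return [u for u in range(len(P)) if P[u, u] == 1.0]

# gambler's ruin on {0$, 1$, 2$, 3$, 4$} with p = q = 0.5
p = 0.5
P = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [1-p, 0.0,   p, 0.0, 0.0],
              [0.0, 1-p, 0.0,   p, 0.0],
              [0.0, 0.0, 1-p, 0.0,   p],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
print(absorbing_states(P))  # [0, 4]: the 0$ and 4$ boundary states
```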
