Discrete Finite Markov Chains Notes
1 Introduction
• A stochastic process is a sequence of random variables {Xt }nt=0 .
• a Markov chain is a stochastic process in which the state at instant t, noted Xt, depends
only on the previous state Xt−1
• Markov chains are called memoryless since the next state depends only on the current state.
• MCs are used in computer vision, ML (hidden Markov models), randomized algorithms, web
surfing models, and more
• in a Markov chain, the probabilities on the edges going out of a node must always sum to 1
• if the probability of the transition i → j equals that of j → i for all pairs of nodes, then the
graph can be drawn undirected; otherwise it is directed
1.1 Example
given a betting game where the player bets $1 at each step, starting from an initial
fortune $0 < i < $1000: if the player reaches $0 or $1000 the game ends. Modeling this game
as a Markov chain gives the graph below, where p represents the probability of winning the
bet (+$1) and q the probability of losing it (−$1); the absorbing states ($0 and $1000)
are the final states of the system.
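This gambler's-ruin chain is easy to simulate. The sketch below is a minimal Python version; the stakes ($10 start, $20 goal) are my own smaller choice to keep runs short, and it estimates the probability of reaching the goal in the fair case p = q = 1/2:

```python
import random

def win_fraction(trials=2000, start=10, goal=20, p=0.5, seed=1):
    """Estimate P(reach `goal` before $0) by simulating the betting game:
    at each step the fortune moves +1 with probability p, else -1."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = start
        while 0 < x < goal:  # 0 and goal are the absorbing states
            x += 1 if rng.random() < p else -1
        wins += (x == goal)
    return wins / trials

print(win_fraction())  # for a fair game, theory gives start/goal = 0.5
```

For the fair game the classical result is that the win probability equals start/goal, which the simulation should approximate.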
Bensalem Mohammed Abderrahmane
• the TPM (transition probability matrix) is always square, with non-negative entries and each row summing to 1
2 Stationary Distribution
• A stationary distribution of a Markov chain is a probability vector of size |S|, where
S is the set of states; this vector describes the long-run behavior of the Markov chain as time
progresses.
• Consider a weather model with three possible states: sunny (S), cloudy (C), and rainy
(R). We’ll represent the transition probabilities between these states using the following
transition matrix P :
P = [ 0.6 0.3 0.1
      0.2 0.6 0.2
      0.1 0.4 0.5 ]
• This matrix represents the probabilities of transitioning from one weather state to
another over a certain time period. For example, the entry in the first row and first
column (0.6) represents the probability of transitioning from a sunny day to another
sunny day.
• Now, let’s find the stationary distribution for this weather model. We’re looking for a
probability distribution π = (πS , πC , πR ) such that π = π · P .
• To find π, we solve the equation π · P = π together with the normalization condition
πS + πC + πR = 1 (without it the system is underdetermined).
• solving gives π = (12/41, 19/41, 10/41) ≈ (0.293, 0.463, 0.244), which means that as the
Markov chain progresses we will get a sunny day about 29% of the time, a cloudy day
about 46% of the time, and a rainy day about 24% of the time
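A quick numerical check of the stationary distribution of this matrix, assuming NumPy is available: power iteration repeatedly applies π ← π · P from a uniform start until it stabilizes.

```python
import numpy as np

# Transition matrix of the weather example (row order: S, C, R)
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

def stationary_distribution(P, iters=500):
    """Approximate pi satisfying pi = pi @ P by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # start uniform
    for _ in range(iters):
        pi = pi @ P
    return pi

pi = stationary_distribution(P)
print(pi)  # ≈ [12/41, 19/41, 10/41] ≈ [0.293, 0.463, 0.244]
```

Power iteration converges here because the chain is irreducible and aperiodic; solving the linear system π(P − I) = 0 with the normalization constraint would give the same answer exactly.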
• Irreducibility and aperiodicity are two important properties of Markov chains that play
a key role in determining their behavior and whether they possess a unique stationary
distribution.
• A Markov chain is irreducible if it is possible to reach any state from any other state
with positive probability in a finite number of steps. In other words, there are no
"islands" or disconnected parts of the state space that cannot be reached from other
parts. If a Markov chain is irreducible, it means that there is a single, unified system
with a common long-term behavior.
• For example, in the weather model example, if it were impossible to transition from
a rainy day to a sunny day, the Markov chain would not be irreducible because there
would be no way to reach certain states from others. However, if transitions between
all states are possible, the chain would be irreducible.
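Irreducibility of a finite chain can be tested mechanically: all states communicate iff every entry of (I + P)^(n−1) is positive. A small sketch (the function name is my own):

```python
import numpy as np

def is_irreducible(P):
    """True iff every state can reach every other state.
    Uses the fact that (I + P)^(n-1) has all-positive entries
    exactly when the transition graph is strongly connected."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + P, n - 1)
    return bool((M > 0).all())

print(is_irreducible([[0.6, 0.3, 0.1],
                      [0.2, 0.6, 0.2],
                      [0.1, 0.4, 0.5]]))   # True: the weather chain
print(is_irreducible([[1.0, 0.0],
                      [0.0, 1.0]]))        # False: two disconnected states
```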
• A Markov chain is said to be aperiodic if the greatest common divisor (GCD) of the
lengths of all cycles in the chain is 1. Aperiodicity essentially means that the chain
does not exhibit any predictable periodic behavior. In an aperiodic chain, states do not
repeat in a fixed pattern.
• For example, if a Markov chain representing the days of the week had a cycle where
it alternated between Monday and Tuesday every two days, it would not be aperiodic
because it repeats in a predictable pattern. However, if transitions between states occur
in a way that does not follow a fixed pattern, the chain would be aperiodic.
3 Recurrence
• for figure 3, the states B and C are recurrent while the state A is transient; since not all
states are recurrent, the MC as a whole is not recurrent
• meaning: letting τu = min{t ≥ 1 : Xt = u} be the first return time to u, a state u is positively recurrent when P [τu < ∞ | X0 = u] = 1 and E[τu | X0 = u] < ∞
• a recurrent state that isn’t positively recurrent is called null recurrent, meaning that the
chain returns with probability 1 but the expected time to loop back is infinite.
• for this example, if p > 1/2, meaning that the probability of moving forward is greater
than that of going back, then all the states are transient
• if p < 1/2, meaning that the probability of going back (looping) is greater than that of moving
forward, then all the states are positively recurrent
• if p = 1/2, meaning there are equal chances of moving forward or backward, the state
will loop back with probability 1 but it may take an infinite expected time, thus it is null
recurrent.
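These three regimes can be seen in simulation. The sketch below assumes the chain is a walk on the non-negative integers that steps +1 with probability p and −1 otherwise (my reading of the figure), started one step away from state 0, and estimates how often it returns to 0:

```python
import random

def return_fraction(p, trials=2000, max_steps=2000, seed=0):
    """Fraction of walks, started at 1, that hit 0 within max_steps,
    where each step is +1 with probability p and -1 otherwise."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = 1
        for _ in range(max_steps):
            x += 1 if rng.random() < p else -1
            if x == 0:
                hits += 1
                break
    return hits / trials

print(return_fraction(0.3))  # p < 1/2: nearly every walk returns
print(return_fraction(0.7))  # p > 1/2: theory gives (1-p)/p = 3/7 of walks return
```

The p = 1/2 case is harder to show this way: every walk eventually returns, but with infinite expected return time, so any finite `max_steps` cutoff misses a noticeable fraction of returns.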
4 Periodicity
• A state is said to be ergodic if it is aperiodic and positive recurrent. A Markov chain
is ergodic if all of its states are ergodic.
• if d(sj ) > 1 we say that sj is periodic; otherwise, if d(sj ) = 1, we say that sj is aperiodic, where d(sj ) = gcd{t ≥ 1 : (P^t)jj > 0} is the period of state sj
• the sets of nodes {A,B} and {C,D} are called recurrent classes, denoted R1 = {A, B} (periodic)
and R2 = {C, D} (aperiodic).
P = [ 0 1 ; 1 0 ],  P^2 = [ 1 0 ; 0 1 ],  P^3 = [ 0 1 ; 1 0 ],  ...
• for the first MC, which is periodic, limt→∞ P^t does not exist, but the chain still has a single
stationary distribution π = (1/2, 1/2)T
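A quick check of this claim with NumPy: powers of the flip matrix oscillate forever, yet π = (1/2, 1/2) still satisfies π = π · P:

```python
import numpy as np

P = np.array([[0., 1.],
              [1., 0.]])           # period-2 chain: A -> B -> A -> ...
pi = np.array([0.5, 0.5])

print(pi @ P)                        # [0.5 0.5]: pi is stationary
print(np.linalg.matrix_power(P, 2))  # the identity matrix
print(np.linalg.matrix_power(P, 3))  # back to P, so P^t has no limit
```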
5 Aperiodicity
• if a Markov chain contains a single reflexive transition (self-loop) Pi,i > 0 and is irreducible, then it is aperiodic
• if (P^t)i,i > 0 and (P^k)i,i > 0 with gcd(t, k) = 1, then state i is aperiodic
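The period of a state, the gcd of all return times t with (P^t)i,i > 0, can be computed directly. A sketch; the `max_t` search bound is an assumption of mine:

```python
from math import gcd
import numpy as np

def state_period(P, i, max_t=50):
    """Period of state i: gcd of all t <= max_t with (P^t)[i, i] > 0."""
    P = np.asarray(P, dtype=float)
    d, Pt = 0, np.eye(P.shape[0])
    for t in range(1, max_t + 1):
        Pt = Pt @ P                # Pt now holds P^t
        if Pt[i, i] > 0:
            d = gcd(d, t)          # gcd(0, t) == t seeds the accumulator
    return d

flip = [[0., 1.], [1., 0.]]
print(state_period(flip, 0))   # 2: the chain alternates between its two states
loop = [[0.5, 0.5], [1., 0.]]
print(state_period(loop, 0))   # 1: the self-loop makes the state aperiodic
```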
• for every state si belonging to a transient class, the corresponding stationary distribution
element is πi = 0
6 Reducibility
• the above MC model is sensitive to the initial state: the long-term behavior of the chain
depends heavily on where it starts
• if from any state of the MC we can reach any other state with positive probability in a finite
number of steps, we call that MC irreducible
• in a finite MC, if the chain is irreducible then all of its states are positively recurrent
• it is important to note that a matrix P is regular when some power P^k of it has strictly positive entries
• recall that a periodic Markov chain does not necessarily have a single stationary distribution,
and that the element πi isn’t necessarily the limit of the corresponding entry of P^t
• for a regular (irreducible and aperiodic) MC, limt→∞ P^t exists and its rows are copies of the stationary distribution
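For the regular weather chain from section 2 this limit is easy to observe numerically: a large matrix power already has every row equal to π (a minimal check, assuming NumPy):

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

limit = np.linalg.matrix_power(P, 50)  # P^50 approximates lim P^t
print(limit)  # every row ≈ (12/41, 19/41, 10/41), the stationary distribution
```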