Advanced Mathematics and Statistics
Module 2 - Advanced Statistical Methods
An exercise on Bayesian statistics
Exercise. Let $X_1, \dots, X_n \mid \sigma \overset{\text{iid}}{\sim} f(\cdot \mid \sigma)$, where
\[
f(x \mid \sigma) = \frac{1}{x}\sqrt{\frac{\sigma}{2\pi}} \exp\left\{ -\frac{\sigma}{2} (\log x)^2 \right\}, \qquad x > 0,
\]
and $\sigma$ is a positive quantity, whose prior distribution over $\mathbb{R}^+$ is a Gamma with shape--rate
parameters $(1, 2)$, i.e. $p(\sigma) = 2e^{-2\sigma}$ for $\sigma > 0$.
(a) Determine the posterior distribution of $\sigma$, given $X_1 = x_1, \dots, X_n = x_n$.
(b) Determine the Bayes estimator $\hat{\sigma}_p$ of $\sigma$ under a squared loss function.
(c) Assuming $\sigma$ fixed, i.e., $X_1, \dots, X_n \overset{\text{iid}}{\sim} f(\cdot \mid \sigma)$, determine the MLE $\hat{\sigma}$ of $\sigma$ and show
that $\hat{\sigma}_p / \hat{\sigma} \to 1$ in probability as $n \to +\infty$.
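Before solving, note that $f(\cdot\mid\sigma)$ is the density of $e^Z$ with $Z \sim N(0, 1/\sigma)$ (a lognormal whose log-variance is $1/\sigma$, so $\sigma$ plays the role of a precision). A minimal Python sketch, with an illustrative $\sigma$ value, samples from the model and checks that $\mathbb{E}[(\log X)^2] = 1/\sigma$:

```python
import math
import random

def sample_f(sigma, n, rng):
    # If X ~ f(.|sigma), then log X ~ N(0, 1/sigma), so X = exp(Z/sqrt(sigma)).
    return [math.exp(rng.gauss(0.0, 1.0 / math.sqrt(sigma))) for _ in range(n)]

rng = random.Random(0)
sigma = 2.0  # illustrative choice
xs = sample_f(sigma, 100_000, rng)

# E[(log X)^2] = Var(log X) = 1/sigma = 0.5 here
m = sum(math.log(x) ** 2 for x in xs) / len(xs)
print(m)  # close to 0.5
```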
Solution
(a) Recall that the likelihood function of the data is
\[
f(x_1, \dots, x_n \mid \sigma) = \frac{1}{\prod_{i=1}^n x_i} \left(\frac{\sigma}{2\pi}\right)^{n/2} \exp\left\{ -\frac{\sigma}{2} \sum_{i=1}^n \log(x_i)^2 \right\}.
\]
We may apply Bayes' theorem to determine the posterior:
\[
p(\sigma \mid x_1, \dots, x_n) \propto f(x_1, \dots, x_n \mid \sigma) \cdot p(\sigma) \propto \sigma^{n/2} \exp\left\{ -\sigma \left( \frac{1}{2} \sum_{i=1}^n \log(x_i)^2 + 2 \right) \right\},
\]
therefore
\[
\sigma \mid X_1 = x_1, \dots, X_n = x_n \sim \mathrm{Gamma}\left( \frac{n}{2} + 1, \; \frac{1}{2} \sum_{i=1}^n \log(x_i)^2 + 2 \right).
\]
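As a numerical sanity check of the conjugacy algebra, the unnormalized posterior (likelihood times prior, dropping $\sigma$-free factors) should be proportional to the claimed Gamma kernel; the ratio of the two is then constant in $\sigma$. A sketch on hypothetical data values:

```python
import math

xs = [0.8, 1.5, 2.2, 0.5, 1.1, 3.0]  # hypothetical data
n = len(xs)
S = sum(math.log(x) ** 2 for x in xs)
a, b = n / 2 + 1, S / 2 + 2  # claimed Gamma shape and rate

def unnorm_post(sigma):
    # likelihood times prior, keeping only sigma-dependent factors
    return sigma ** (n / 2) * math.exp(-sigma * (S / 2 + 2))

def gamma_kernel(sigma):
    return sigma ** (a - 1) * math.exp(-b * sigma)

# If the kernels are proportional, these ratios coincide.
r1 = unnorm_post(1.3) / gamma_kernel(1.3)
r2 = unnorm_post(0.4) / gamma_kernel(0.4)
print(r1, r2)  # equal
```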
(b) The Bayes estimator of $\sigma$ under a squared loss function is the posterior mean:
\[
\hat{\sigma}_p = \int_0^\infty \sigma \, p(\sigma \mid x_1, \dots, x_n) \, d\sigma = \frac{n/2 + 1}{\frac{1}{2}\sum_{i=1}^n \log(x_i)^2 + 2} = \frac{n + 2}{4 + \sum_{i=1}^n \log(x_i)^2},
\]
where we used the fact that the mean of a Gamma with parameters $(a, b)$ equals $a/b$.
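The closed form can be cross-checked by Monte Carlo: draw from the Gamma posterior of part (a) and compare the sample mean to $(n+2)/(4 + \sum_i \log(x_i)^2)$. A sketch on hypothetical data (note that Python's `random.gammavariate` takes a *scale*, so we pass the reciprocal of the rate):

```python
import math
import random

xs = [0.8, 1.5, 2.2, 0.5, 1.1, 3.0]  # hypothetical data
n = len(xs)
S = sum(math.log(x) ** 2 for x in xs)
sigma_hat_p = (n + 2) / (4 + S)  # closed-form posterior mean

# Monte Carlo mean of Gamma(n/2 + 1, S/2 + 2) (shape-rate parameterization)
rng = random.Random(1)
a, rate = n / 2 + 1, S / 2 + 2
draws = [rng.gammavariate(a, 1.0 / rate) for _ in range(200_000)]
mc_mean = sum(draws) / len(draws)
print(sigma_hat_p, mc_mean)  # the two agree up to Monte Carlo error
```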
(c) Now we have to maximize the likelihood function:
\[
L(\sigma) = f(x_1, \dots, x_n \mid \sigma) = \frac{1}{\prod_{i=1}^n x_i} \left(\frac{\sigma}{2\pi}\right)^{n/2} \exp\left\{ -\frac{\sigma}{2} \sum_{i=1}^n \log(x_i)^2 \right\}.
\]
For simplicity, we consider the log-likelihood function
\[
\ell(\sigma) = \log(L(\sigma)) = -\sum_{i=1}^n \log(x_i) + \frac{n}{2} \log\left(\frac{\sigma}{2\pi}\right) - \frac{\sigma}{2} \sum_{i=1}^n \log(x_i)^2.
\]
It is easy to see that
\[
\frac{\partial}{\partial \sigma} \ell(\sigma) \ge 0 \quad \text{iff} \quad \sigma \le \frac{n}{\sum_{i=1}^n \log(x_i)^2},
\]
therefore
\[
\hat{\sigma} = \frac{n}{\sum_{i=1}^n \log(X_i)^2}
\]
is the MLE of $\sigma$. In order to prove the convergence in probability, we use the consistency of the MLE: since $\log X_i \sim N(0, 1/\sigma)$, the law of large numbers gives $\frac{1}{n}\sum_{i=1}^n \log(X_i)^2 \xrightarrow{p} 1/\sigma$, hence $\hat{\sigma} \xrightarrow{p} \sigma$ as $n \to +\infty$. As a consequence we obtain:
\[
\frac{\hat{\sigma}_p}{\hat{\sigma}} = \frac{n + 2}{4 + \sum_{i=1}^n \log(X_i)^2} \cdot \frac{1}{\hat{\sigma}} = \frac{n + 2}{4 + n/\hat{\sigma}} \cdot \frac{1}{\hat{\sigma}} \xrightarrow{p} \frac{\sigma}{\sigma} = 1
\]
as $n \to +\infty$.
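The convergence of the ratio can be illustrated by simulation: for growing $n$, generate $\log X_i \sim N(0, 1/\sigma)$, form both estimators in closed form, and watch $\hat{\sigma}_p/\hat{\sigma}$ approach $1$. A sketch with an illustrative true $\sigma$:

```python
import math
import random

rng = random.Random(42)
sigma_true = 1.7  # illustrative choice

def ratio(n):
    # S = sum of log(X_i)^2, with log X_i ~ N(0, 1/sigma_true)
    S = sum(rng.gauss(0.0, 1.0 / math.sqrt(sigma_true)) ** 2 for _ in range(n))
    sigma_p = (n + 2) / (4 + S)  # Bayes estimator (posterior mean)
    sigma_mle = n / S            # MLE
    return sigma_p / sigma_mle

for n in (10, 100, 10_000):
    print(n, ratio(n))
# the ratio tends to 1 as n grows (small n may deviate noticeably)
```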