Chapter 3: Basic Probability and Statistics
Outline
3.1 Introduction
3.2 Probability and random variables, various properties,
stochastic process
3.3 Estimation of means, variances, and correlations
3.4 Null hypothesis, type I error, type II error, power of test
3.1 Introduction
– The completion of a successful simulation study involves much more than constructing a
flowchart of the system under study, translating the flowchart into a computer “program,” and
then making one or a few replications of each proposed system configuration.
– The use of probability and statistics is such an integral part of a simulation study that every
simulation modeling team should include at least one person who is thoroughly trained in such
techniques.
– In particular, probability and statistics are needed to
– understand how to model a probabilistic system,
– validate the simulation model,
– choose the input probability distributions,
– generate random samples from these distributions,
– perform statistical analyses of the simulation output data, and
– design the simulation experiments.
3.2 Probability and random variables, various properties, stochastic process
– An experiment is a process whose outcome is not known with certainty.
– The set of all possible outcomes of an experiment is called the sample space and is
denoted by S.
– The outcomes themselves are called the sample points in the sample space.
– EXAMPLE 3.1. If the experiment consists of flipping a coin, then
S = {H, T}
where the symbol { } means the “set consisting of,” and “H” and “T” mean that the
outcome is a head and a tail, respectively.
– EXAMPLE 3.2. If the experiment consists of tossing a die, then
S = {1, 2, . . . , 6}
where the outcome i means that i appeared on the die, i = 1, 2, . . . , 6.
– A random variable is a function (or rule) that assigns a real number (any number
greater than -∞ and less than ∞) to each point in the sample space S.
– EXAMPLE 3.3. Consider the experiment of rolling a pair of dice. Then
S = {(1, 1), (1, 2), . . . , (6, 6)}
where (i, j) means that i and j appeared on the first and second die,
respectively. If X is the random variable corresponding to the sum of the two
dice, then X assigns the value 7 to the outcome (4, 3).
– EXAMPLE 3.4. Consider the experiment of flipping two coins. If X is the random
variable corresponding to the number of heads that occur, then X assigns the value
1 to either the outcome (H, T) or the outcome (T, H).
– In general, we denote random variables by capital letters such as X, Y, Z and the
values that random variables take on by lowercase letters such as x, y, z.
– The distribution function (sometimes called the cumulative distribution function) F(x) of
the random variable X is defined for each real number x as follows:
F(x) = P(X ≤ x) for - ∞ < x < ∞
where P(X ≤ x) means the probability associated with the event {X ≤ x}.
– Thus, F(x) is the probability that, when the experiment is done, the random variable X will
have taken on a value no larger than the number x.
– A distribution function F(x) has the following properties:
1. 0 ≤ F(x) ≤ 1 for all x.
2. F(x) is nondecreasing [i.e., if x1 < x2, then F(x1) ≤ F(x2)].
3. lim_{x→∞} F(x) = 1 and lim_{x→−∞} F(x) = 0 (since X takes on only finite values).
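– To make the definition concrete, here is a minimal Python sketch (our own illustration, using the die of Example 3.2) that evaluates F(x) directly from the sample space and checks the three properties:

```python
# A minimal sketch, assuming the fair die of Example 3.2;
# the function name die_cdf is our own.

def die_cdf(x: float) -> float:
    """Distribution function of a fair die: F(x) = P(X <= x)."""
    outcomes = [1, 2, 3, 4, 5, 6]
    return sum(1 for o in outcomes if o <= x) / len(outcomes)

# Property 1: 0 <= F(x) <= 1 for all x.
assert all(0.0 <= die_cdf(x) <= 1.0 for x in range(-2, 10))
# Property 2: F is nondecreasing.
assert die_cdf(2.5) <= die_cdf(3.5)
# Property 3: F(x) -> 0 as x -> -infinity and F(x) -> 1 as x -> infinity.
assert die_cdf(-100) == 0.0 and die_cdf(100) == 1.0

print(die_cdf(3))  # P(X <= 3) = 0.5
```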
– A random variable X is said to be discrete if it can take on at most a countable number of
values, say, x1, x2, . . . .
– (“Countable” means that the set of possible values can be put in a one-to-one
correspondence with the set of positive integers. An example of an uncountable set is all
real numbers between 0 and 1.)
– Thus, a random variable that takes on only a finite number of values x1, x2, . . . , xn is
discrete.
– The probability that the discrete random variable X takes on the value xi is given by
p(xi) = P(X = xi) for i = 1, 2, . . .
and we must have
∑_{i=1}^{∞} p(xi) = 1
where the summation means add together p(x1), p(x2), . . . .
– All probability statements about X can be computed (at least in principle) from p(x), which
is called the probability mass function for the discrete random variable X.
– If I = [a, b], where a and b are real numbers such that a ≤ b, then
P(X ∈ I) = ∑_{a ≤ xi ≤ b} p(xi)
where the symbol ∈ means “contained in” and the summation means add
together 𝑝(𝑥𝑖) for all 𝑥𝑖 such that a ≤ 𝑥𝑖 ≤ b.
– The distribution function F(x) for the discrete random variable X is given by
F(x) = ∑_{xi ≤ x} p(xi) for all −∞ < x < ∞
– EXAMPLE 3.5. A manufacturing system produces parts that must then be inspected for
quality. Suppose that 90 percent of the inspected parts are good (denoted by 1) and 10
percent are bad and must be scrapped (denoted by 0). If X denotes the outcome of inspecting
a part, then X is a discrete random variable with p(0) = 0.1 and p(1) = 0.9.
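– A minimal Python sketch of this pmf (the dictionary encoding and use of random.choices are our own illustration): sampling many inspected parts should reproduce p(1) = 0.9 as the observed fraction of good parts.

```python
# A minimal sketch of the inspection pmf from Example 3.5: p(0) = 0.1, p(1) = 0.9.
import random

p = {0: 0.1, 1: 0.9}                       # probability mass function
assert abs(sum(p.values()) - 1.0) < 1e-12  # a pmf must sum to 1

# Draw 100,000 inspected parts and compare the observed fraction of
# good parts with p(1) = 0.9.
random.seed(42)
parts = random.choices(list(p), weights=list(p.values()), k=100_000)
print(sum(parts) / len(parts))  # approximately 0.9
```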
– EXAMPLE 3.6. A company that sells a single product would like to decide how many items it should have
in inventory for each of the next n months (n is a fixed input parameter). The times between demands
are IID exponential random variables with a mean of 0.1 month. The sizes of the demands, D, are IID
random variables (independent of when the demands occur), with
D = 1 w.p. 1/6,  2 w.p. 1/3,  3 w.p. 1/3,  4 w.p. 1/6
where w.p. is read “with probability.”
– The size of the demand for the product is a discrete random variable X that takes on the values 1, 2, 3, 4 with respective probabilities 1/6, 1/3, 1/3, 1/6. For example,
P(2 ≤ X ≤ 3) = p(2) + p(3) = 1/3 + 1/3 = 2/3
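– A minimal sketch computing P(a ≤ X ≤ b) = ∑_{a ≤ xi ≤ b} p(xi) for this pmf (interval_prob is our own helper name; exact fractions avoid rounding):

```python
# A minimal sketch of the demand-size pmf of Example 3.6.
from fractions import Fraction

p = {1: Fraction(1, 6), 2: Fraction(1, 3), 3: Fraction(1, 3), 4: Fraction(1, 6)}

def interval_prob(a, b):
    """P(X in [a, b]) for a discrete random variable with pmf p."""
    return sum(prob for x, prob in p.items() if a <= x <= b)

print(interval_prob(2, 3))  # 2/3, matching p(2) + p(3) = 1/3 + 1/3
```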
Figure: p(x) for the demand-size random variable X.
Figure: F(x) for the demand-size random variable X.
– We now consider random variables that can take on an uncountably infinite number of
different values (e.g., all nonnegative real numbers).
– A random variable X is said to be continuous if there exists a nonnegative function f (x)
such that for any set of real numbers B (e.g., B could be all real numbers between 1 and 2),
P(X ∈ B) = ∫_B f(x) dx  and  ∫_{−∞}^{∞} f(x) dx = 1
– Thus, the total area under f (x) is 1.
– Also, if X is a nonnegative random variable, as is often the case in simulation applications,
the second range of integration is from 0 to ∞.
– All probability statements about X can (in principle) be computed from f (x), which is called
the probability density function for the continuous random variable X.
– For a discrete random variable X, p(x) is the actual probability associated with the value x.
– However, f (x) is not the probability that a continuous random variable X equals x.
– For any real number x,
P(X = x) = P(X ∈ [x, x]) = ∫_x^x f(y) dy = 0
– Since the probability associated with each value x is zero, we now give an interpretation to
f (x).
– If x is any number and ∆x > 0, then
P(X ∈ [x, x + ∆x]) = ∫_x^{x+∆x} f(y) dy
which is the area under f(x) between x and x + ∆x, as shown in the following figure.
Figure: Interpretation of the probability density function f (x).
– It follows that a continuous random variable X is more likely to fall in an interval above
which f (x) is “large” than in an interval of the same width above which f (x) is “small.”
– The distribution function F(x) for a continuous random variable X is given by
F(x) = P(X ∈ (−∞, x]) = ∫_{−∞}^{x} f(y) dy for all −∞ < x < ∞
– Thus (under some mild technical assumptions), f(x) = F’(x) [the derivative of F(x)].
Furthermore, if I = [a, b] for any real numbers a and b such that a < b, then
P(X ∈ I) = ∫_a^b f(y) dy = F(b) − F(a)
where the last equality is an application of the fundamental theorem of calculus, since F’(x) =
f (x).
– EXAMPLE 3.7. A uniform random variable on the interval [0, 1] has the following
probability density function:
f(x) = 1 if 0 ≤ x ≤ 1, and f(x) = 0 otherwise
– Furthermore, if 0 ≤ x ≤ 1, then
F(x) = ∫_0^x f(y) dy = ∫_0^x 1 dy = x
Fig. f(x) for a uniform random variable on [0, 1].
Fig. F(x) for a uniform random variable on [0, 1].
– Finally, if 0 ≤ x < x + ∆x ≤ 1, then
P(X ∈ [x, x + ∆x]) = ∫_x^{x+∆x} f(y) dy
                   = F(x + ∆x) − F(x)
                   = (x + ∆x) − x
                   = ∆x
– It follows that a uniform random variable on [0, 1] is equally likely to fall in any
interval of length ∆𝑥 between 0 and 1, which justifies the name “uniform.”
– The uniform random variable on [0, 1] is fundamental to simulation, since it is the
basis for generating any random quantity on a computer.
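– A minimal sketch of this idea, the inverse-transform method: a Uniform(0, 1) variate U is turned into an exponential variate (mean 0.1, as in Example 3.6) by inverting its distribution function. The function name is our own.

```python
# A minimal sketch of inverse-transform generation: invert
# F(x) = 1 - exp(-x/mean) to map a Uniform(0, 1) variate to an
# exponential variate.
import math
import random

def exponential_via_inverse_transform(mean: float) -> float:
    """Return an exponential variate by computing F^{-1}(U)."""
    u = random.random()               # U ~ Uniform(0, 1)
    return -mean * math.log(1.0 - u)  # F^{-1}(u) = -mean * ln(1 - u)

random.seed(1)
samples = [exponential_via_inverse_transform(0.1) for _ in range(100_000)]
print(sum(samples) / len(samples))    # approximately 0.1, the desired mean
```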
– So far in this chapter we have considered only one random variable at a time, but
in a simulation one must usually deal with n (a positive integer) random variables
X1, X2, . . . , Xn simultaneously.
– For example, in the queueing model, we were interested in the (input) service-time
random variables S1, S2, . . . , Sn and the (output) delay random variables D1, D2, . . .
, Dn .
– In the discussion that follows, we assume for expository convenience that n = 2 and
that the two random variables in question are X and Y.
– If X and Y are discrete random variables, then let
p(x, y) = P(X = x, Y = y) for all x, y
where p(x, y) is called the joint probability mass function of X and Y.
– In this case, X and Y are independent if
p(x, y) = pX(x)pY (y) for all x, y
where
pX(x) = ∑_{all y} p(x, y)
pY(y) = ∑_{all x} p(x, y)
are the (marginal) probability mass functions of X and Y.
– EXAMPLE 3.8. Suppose that X and Y are jointly discrete random variables with
p(x, y) = xy/27 for x = 1, 2 and y = 2, 3, 4, and p(x, y) = 0 otherwise
Then
pX(x) = ∑_{y=2}^{4} xy/27 = x/3 for x = 1, 2
pY(y) = ∑_{x=1}^{2} xy/27 = y/9 for y = 2, 3, 4
Since p(x, y) = xy/27 = pX(x)pY(y) for all x, y, the random variables X and Y are independent.
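– A minimal Python sketch (our own illustration, using exact fractions) that verifies the factorization p(x, y) = pX(x)pY(y) of Example 3.8 numerically:

```python
# A minimal sketch checking the independence factorization of Example 3.8.
from fractions import Fraction

xs, ys = [1, 2], [2, 3, 4]
p = {(x, y): Fraction(x * y, 27) for x in xs for y in ys}  # joint pmf

pX = {x: sum(p[(x, y)] for y in ys) for x in xs}  # marginal of X: x/3
pY = {y: sum(p[(x, y)] for x in xs) for y in ys}  # marginal of Y: y/9

# X and Y are independent iff p(x, y) = pX(x) * pY(y) for all x, y.
print(all(p[(x, y)] == pX[x] * pY[y] for x in xs for y in ys))  # True
```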
– EXAMPLE 3.9. Suppose that 2 cards are dealt from a deck of 52, without replacement. Let the random variables X and Y be the number of aces and kings that occur, both of which have possible values of 0, 1, 2. It can be shown that
pX(1) = pY(1) = 2(4/52)(48/51)
and
p(1, 1) = 2(4/52)(4/51)
Since
p(1, 1) = 2(4/52)(4/51) ≠ [2(4/52)(48/51)]² = pX(1)pY(1)
it follows that X and Y are not independent.
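– A minimal Monte Carlo sketch of Example 3.9 (the deck encoding and seed are our own): dealing many two-card hands shows that p(1, 1) differs from pX(1)pY(1).

```python
# A minimal sketch: estimate p(1, 1) and pX(1) * pY(1) by simulation.
import random

random.seed(7)
deck = ["A"] * 4 + ["K"] * 4 + ["other"] * 44  # 52 cards
n, joint, aces, kings = 200_000, 0, 0, 0
for _ in range(n):
    c1, c2 = random.sample(deck, 2)            # deal without replacement
    x = (c1 == "A") + (c2 == "A")              # number of aces
    y = (c1 == "K") + (c2 == "K")              # number of kings
    joint += (x == 1 and y == 1)
    aces += (x == 1)
    kings += (y == 1)

print(joint / n)                 # ~ 2(4/52)(4/51)      = 0.0121...
print((aces / n) * (kings / n))  # ~ [2(4/52)(48/51)]^2 = 0.0210..., so dependent
```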
– The random variables X and Y are jointly continuous if there exists a nonnegative
function f (x, y), called the joint probability density function of X and Y, such that
for all sets of real numbers A and B,
P(X ∈ A, Y ∈ B) = ∫_B ∫_A f(x, y) dx dy
– In this case, X and Y are independent if
f(x, y) = fX(x)fY(y) for all x, y
where
fX(x) = ∫_{−∞}^{∞} f(x, y) dy
fY(y) = ∫_{−∞}^{∞} f(x, y) dx
are the (marginal) probability density functions of X and Y, respectively.
– Intuitively, the random variables X and Y (whether discrete or continuous) are independent if
knowing the value that one random variable takes on tells us nothing about the distribution of the
other.
– Also, if X and Y are not independent, we say that they are dependent.
– We now consider once again the case of n random variables X1, X2, . . . , Xn, and we discuss some
characteristics of the single random variable Xi and some measures of the dependence that may
exist between two random variables Xi and Xj.
– The mean or expected value of the random variable Xi (where i = 1, 2, . . . , n) will be denoted by 𝜇i
or E(Xi) and is defined by
μi = ∑_{j=1}^{∞} xj pXi(xj)   if Xi is discrete
μi = ∫_{−∞}^{∞} x fXi(x) dx   if Xi is continuous
– The mean is one measure of central tendency, in the sense that it is the center of gravity of the distribution.
– Let c or ci denote a constant (real number). Then the following are important properties of means:
1. E(cX) = cE(X).
2. E(∑_{i=1}^{n} ci Xi) = ∑_{i=1}^{n} ci E(Xi) even if the Xi’s are dependent.
– The median x0.5 of the random variable Xi, which is an alternative measure of central tendency, is
defined to be the smallest value of x such that 𝐹𝑋𝑖 (x) ≥ 0.5.
– If Xi is a continuous random variable, then 𝐹𝑋𝑖 (x0.5) = 0.5
– The median may be a better measure of central tendency than the mean when Xi can take on very
large or very small values, since extreme values can greatly affect the mean even if they are very
unlikely to occur; such is not the case with the median.
– EXAMPLE. Consider a discrete random variable X that takes on each of the values 1, 2, 3, 4, and 5
with probability 0.2. Clearly, the mean and median of X are 3. Consider now a random variable Y
that takes on each of the values 1, 2, 3, 4, and 100 with probability 0.2. The mean and median of Y
are 22 and 3, respectively. Note that the median is insensitive to this change in the distribution.
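– A minimal sketch of this comparison using the standard library (statistics.mean and statistics.median); since each value occurs with equal probability 0.2, the mean of the list equals the expected value.

```python
# A minimal sketch of the mean/median comparison in the example above.
import statistics

X = [1, 2, 3, 4, 5]    # each value w.p. 0.2
Y = [1, 2, 3, 4, 100]  # same, but with one extreme value

print(statistics.mean(X), statistics.median(X))  # 3, 3
print(statistics.mean(Y), statistics.median(Y))  # 22, 3: the median is unchanged
```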
– The mode m of a continuous (discrete) random variable Xi, which is another
alternative measure of central tendency, is defined to be that value of x that
maximizes 𝑓𝑋𝑖 (x)[𝑝𝑋𝑖 (x)].
– Note that the mode may not be unique for some distributions.
FIG. The median x0.5 and mode m for a continuous random variable.
– The variance of the random variable Xi will be denoted by 𝜎𝑖2 or Var(Xi) and is
defined by
σi² = E[(Xi − μi)²] = E(Xi²) − μi²
– The variance is a measure of the dispersion of a random variable about its mean.
– The larger the variance, the more likely the random variable is to take on values far
from its mean.
FIG. Density functions for continuous random variables with large and small variances.
– The variance has the following properties:
1. Var(X) ≥ 0.
2. Var(cX) = c2Var(X).
3. Var(∑_{i=1}^{n} Xi) = ∑_{i=1}^{n} Var(Xi) if the Xi’s are independent (or uncorrelated).
– The standard deviation of the random variable Xi is defined to be σi = √(σi²).
– The standard deviation can be given the most definitive interpretation when Xi has a normal distribution.
– In particular, suppose that Xi has a normal distribution with mean 𝜇𝑖 and standard deviation 𝜎𝑖 .
– In this case, for example, the probability that Xi is between 𝜇𝑖 - 1.96𝜎𝑖 and 𝜇𝑖 + 1.96𝜎𝑖 is 0.95.
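– A minimal Monte Carlo sketch of the 1.96σ statement (μ = 10 and σ = 2 are our own choices for illustration):

```python
# A minimal sketch: about 95 percent of normal samples fall
# within mu +/- 1.96 * sigma.
import random

random.seed(3)
mu, sigma, n = 10.0, 2.0, 100_000
inside = sum(
    1 for _ in range(n)
    if abs(random.gauss(mu, sigma) - mu) <= 1.96 * sigma
)
print(inside / n)  # approximately 0.95
```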
– We now consider measures of dependence between two random variables.
– The covariance between the random variables Xi and Xj (where i = 1, 2, . . . , n; j = 1, 2, . . . , n), which is a
measure of their (linear) dependence, will be denoted by Cij or Cov(Xi, Xj) and is defined by
Cij = E[(Xi − μi)(Xj − μj)] = E(XiXj) − μiμj    (3.1)
– Note that covariances are symmetric, that is, Cij = Cji, and that if i = j, then Cij = Cii = σi².
– If 𝐶𝑖𝑗 = 0, the random variables Xi and Xj are said to be uncorrelated.
– It is easy to show that if Xi and Xj are independent random variables, then 𝐶𝑖𝑗 = 0.
– In general, though, the converse is not true.
– However, if Xi and Xj are jointly normally distributed random variables with 𝐶𝑖𝑗 = 0, then they are also independent.
– If 𝐶𝑖𝑗 > 0, then Xi and Xj are said to be positively correlated.
– In this case, 𝑋𝑖 > 𝜇𝑖 and 𝑋𝑗 > 𝜇𝑗 tend to occur together, and 𝑋𝑖 < 𝜇𝑖 and 𝑋𝑗 < 𝜇𝑗 also tend to occur together [see
Eq. (3.1)].
– Thus, for positively correlated random variables, if one is large, the other is likely to be large also.
– If 𝐶𝑖𝑗 < 0, then Xi and Xj are said to be negatively correlated.
– In this case, 𝑋𝑖 > 𝜇𝑖 and 𝑋𝑗 < 𝜇𝑗 tend to occur together, and 𝑋𝑖 < 𝜇𝑖 and 𝑋𝑗 > 𝜇𝑗 also tend to occur together.
– Thus, for negatively correlated random variables, if one is large, the other is likely to be small.
– If X1, X2, . . . , Xn are simulation output data (for example, Xi might be the delay Di for the queueing example), we shall
often need to know not only the mean 𝜇𝑖 and variance 𝜎𝑖2 for i = 1, 2, . . . , n, but also a measure of the dependence
between Xi and Xj for i ≠ j.
– However, the difficulty with using Cij as a measure of dependence between Xi and Xj is that it is not dimensionless,
which makes its interpretation troublesome.
– If Xi and Xj are in units of minutes, say, then Cij is in units of minutes squared.
– As a result, we use the correlation ρij [also written Cor(Xi, Xj)], defined by
ρij = Cij / √(σi² σj²)    for i, j = 1, 2, . . . , n    (3.2)
as our primary measure of the (linear) dependence between Xi and Xj.
– Since the denominator in Eq. (3.2) is positive, it is clear that 𝜌𝑖𝑗 has the same sign as Cij.
– Furthermore, it can be shown that -1 ≤ 𝜌𝑖𝑗 ≤ 1 for all i and j .
– If 𝜌𝑖𝑗 is close to +1, then Xi and Xj are highly positively correlated.
– On the other hand, if 𝜌𝑖𝑗 is close to -1, then Xi and Xj are highly negatively correlated.
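– A minimal sketch computing a sample estimate of Eq. (3.2) from paired data (the helper name and data are our own; the population formula would use the true means and variances instead of the sample ones):

```python
# A minimal sketch of a sample correlation estimate.
import statistics

def sample_correlation(x, y):
    """Sample estimate of rho = Cov(X, Y) / sqrt(Var(X) Var(Y))."""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (statistics.stdev(x) * statistics.stdev(y))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.8, 5.1]    # nearly linear in x
print(sample_correlation(x, y))  # close to +1: highly positively correlated
```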
Stochastic Process
– A stochastic process is a collection of “similar” random variables ordered over time, which are all defined
on a common sample space.
– The set of all possible values that these random variables can take on is called the state space.
– If the collection is X1, X2, . . . , then we have a discrete-time stochastic process.
– If the collection is {X(t), t ≥ 0}, then we have a continuous-time stochastic process.
– EXAMPLE 3.10. Consider a single-server queueing system, e.g., the M/M/1 queue, with IID interarrival times
A1, A2, . . . , IID service times S1, S2, . . . , and customers served in a FIFO manner. Relative to the
experiment of generating the random variates A1, A2, . . . and S1, S2, . . . , one can define the discrete-time
stochastic process of delays in queue D1, D2, . . . as follows:
D1 = 0
Di+1 = max{Di + Si - Ai+1, 0} for i = 1, 2, . . .
Thus, the simulation maps the input random variables (i.e., the Ai’s and the Si’s) into the output
stochastic process D1, D2, . . . of interest. Here, the state space is the set of nonnegative real
numbers. Note that Di and Di+1 are positively correlated. (Why?)
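– A minimal Python sketch of this recurrence (the arrival and service rates are our own choices, giving utilization 0.8; note that random.expovariate takes a rate, i.e., 1/mean):

```python
# A minimal sketch of the delay recurrence D_1 = 0,
# D_{i+1} = max(D_i + S_i - A_{i+1}, 0), for an M/M/1-style queue.
import random

random.seed(5)
n = 10
A = [random.expovariate(1.0) for _ in range(n)]   # interarrival times, mean 1
S = [random.expovariate(1.25) for _ in range(n)]  # service times, mean 0.8

D = [0.0]                                         # D_1 = 0
for i in range(n - 1):
    D.append(max(D[i] + S[i] - A[i + 1], 0.0))    # Lindley recurrence
print(D)  # successive delays; a long D_i tends to produce a long D_{i+1}
```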
3.3 Estimation of means, variances, and correlations
– Suppose that X1, X2, . . . , Xn are IID random variables (observations) with finite population mean 𝜇
and finite population variance 𝜎 2 and that our primary objective is to estimate 𝜇; the estimation of
𝜎 2 is of secondary interest.
– Then the sample mean
X̄(n) = ∑_{i=1}^{n} Xi / n    (3.3)
is an unbiased (point) estimator of μ; that is, E[X̄(n)] = μ.
– Intuitively, X̄(n) being an unbiased estimator of μ means that if we perform a very large number of independent experiments, each resulting in an X̄(n), the average of the X̄(n)’s will be μ.
– Similarly, the sample variance
S²(n) = ∑_{i=1}^{n} [Xi − X̄(n)]² / (n − 1)    (3.4)
is an unbiased estimator of σ², since E[S²(n)] = σ².
– Note that the estimators X̄(n) and S²(n) are sometimes denoted by μ̂ and σ̂², respectively.
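– A minimal sketch of the estimators in Eqs. (3.3) and (3.4) (the helper names are our own; the standard library functions statistics.mean and statistics.variance compute the same quantities):

```python
# A minimal sketch of the sample mean, Eq. (3.3), and the sample
# variance with its n - 1 denominator, Eq. (3.4).

def sample_mean(x):
    """X-bar(n) = sum(X_i) / n, Eq. (3.3)."""
    return sum(x) / len(x)

def sample_variance(x):
    """S^2(n) = sum((X_i - X-bar(n))^2) / (n - 1), Eq. (3.4)."""
    m = sample_mean(x)
    return sum((xi - m) ** 2 for xi in x) / (len(x) - 1)

data = [2.1, 1.8, 2.4, 2.0, 1.7]  # hypothetical IID observations
print(sample_mean(data), sample_variance(data))
```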
– The difficulty with using X̄(n) as an estimator of μ without any additional information is that we have no way of assessing how close X̄(n) is to μ.
– Because X̄(n) is a random variable with variance Var[X̄(n)], on one experiment X̄(n) may be close to μ while on another X̄(n) may differ from μ by a large amount. (See the next figure, where the Xi’s are assumed to be continuous random variables.)
– The usual way to assess the precision of X̄(n) as an estimator of μ is to construct a confidence interval for μ. However, the first step in constructing a confidence interval is to estimate Var[X̄(n)].
– Since
Var[X̄(n)] = Var((1/n) ∑_{i=1}^{n} Xi)
           = (1/n²) Var(∑_{i=1}^{n} Xi)
           = (1/n²) ∑_{i=1}^{n} Var(Xi)   (because the Xi’s are independent)
           = (1/n²) n σ² = σ²/n    (3.5)
it is clear that, in general, the bigger the sample size n, the closer X̄(n) should be to μ.
– Furthermore, an unbiased estimator of Var[X̄(n)] is obtained by replacing σ² in Eq. (3.5) by S²(n), resulting in
V̂ar[X̄(n)] = S²(n)/n = ∑_{i=1}^{n} [Xi − X̄(n)]² / [n(n − 1)]
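– A minimal sketch of this unbiased estimator of Var[X̄(n)] (statistics.variance uses the n − 1 denominator of Eq. (3.4); the helper name and data are our own):

```python
# A minimal sketch of the estimator S^2(n) / n of Var[X-bar(n)].
import statistics

def estimated_variance_of_mean(x):
    """S^2(n) / n, an unbiased estimator of Var[X-bar(n)] for IID data."""
    return statistics.variance(x) / len(x)

data = [2.1, 1.8, 2.4, 2.0, 1.7]          # hypothetical IID observations
print(estimated_variance_of_mean(data))   # shrinks roughly as 1/n, per Eq. (3.5)
```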
– Finally, note that if the Xi’s are independent, they are uncorrelated, and thus ρj = 0 for j = 1, 2, . . . , n − 1 (where ρj denotes the correlation between observations separated by a lag of j).
– It has been our experience that simulation output data are almost always
correlated.
– Thus, the above discussion about IID observations is not directly applicable to
analyzing simulation output data.
Fig. Distributions of X̄(n) for small and large n.
Assignment
Null hypothesis, type I error, type II error, power of test
– Do research and reading
– Prepare a compiled note
– Put a valid reference