Prob | PDF | Set (Mathematics) | Function (Mathematics)

This document is a lecture on probability by Dr. Rijji Sen, aimed at Sem-III Statistics General students, providing an overview of key concepts and definitions in probability. It covers topics such as random experiments, basic terminology, classical and frequentist definitions of probability, and theorems related to conditional and independent events. The lecture serves as a revision tool and does not include numerical examples.

Uploaded by S Mondal

Probability

Probability

Lecture By:
Dr. Rijji Sen
Asst. Professor
Department of Statistics
Behala College
Probability

Purpose

This video lecture is meant as revision material for Sem-III Statistics General
students. It may also be used as a quick run-through of the Sem-I and Sem-II
syllabuses for Statistics General, and will be especially helpful to those students
who did not opt for Statistics as a minor paper in Sem-I and II. It is an overview of
the topic of probability. Numerical examples are not dealt with here.
Probability

Introduction

If an experiment is repeated a number of times under essentially homogeneous and
similar conditions, then:
Either the result or outcome is unique and certain. Such phenomena are called
deterministic phenomena. Example: for a perfect gas, PV = constant.
Or the result is not unique, but is one of several possible outcomes. Such
phenomena are called stochastic phenomena. Example: a random toss of a coin may
give H or T.
Probability

Basic terminology

A Random Experiment is an experiment, trial, or observation that can be
repeated numerous times under the same conditions. The outcomes of the
individual random experiments must be independent and identically distributed.
Two basic things must be kept in mind:
All possible outcomes must be known.
The outcome of a particular trial is unknown.
For example, if we toss an unbiased coin, the set of all possible outcomes is
{H, T}.
Outcome The result of an experiment.
Trial and Event Any particular performance of a random experiment is called a
trial. An event may be one particular trial, or a number of trials that can be grouped
together due to some common characteristic. For example, in the throw of a die,
each of {1, 2, 3, 4, 5, 6} is the outcome of some trial, and the event "the outcome is
odd" is {1, 3, 5}.
Probability

Basic terminology

Sample space The set of all possible outcomes of an experiment. For example,
when a coin is tossed it is {H, T}. The result of a trial of the random experiment
gives an outcome, called an elementary event or sample point. The totality of
all possible outcomes is called a sample space.
Different kinds of events:
Favourable events
Exhaustive events
Mutually exclusive events
Independent events
Equally likely events
Probability

Classical Definition of Probability

If a random experiment results in n exhaustive, mutually exclusive and equally likely
cases, out of which m are favourable to the occurrence of an event A, then the
probability of occurrence of event A is

P(A) = (no. of favourable cases) / (total no. of cases) = m/n    (1)
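As a minimal sketch, the classical definition can be evaluated by direct counting; exact fractions avoid floating-point round-off (the function name here is illustrative, not from the lecture):

```python
from fractions import Fraction

def classical_probability(favourable, total):
    """Classical definition: P(A) = m/n over equally likely, exhaustive cases."""
    return Fraction(favourable, total)

# Event A: "one roll of a fair die is odd" -> m = 3 favourable out of n = 6 cases
p_odd = classical_probability(3, 6)
print(p_odd)  # 1/2
```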
Probability

Limitations of the classical definition

If the various outcomes of the random experiment are not equally likely or equally
probable.
If the exhaustive number of outcomes of the experiment is infinite or unknown.
Probability

Frequentist Definition of probability

Definition (Von Mises). If an experiment is performed repeatedly under essentially
homogeneous and identical conditions, then the limiting value of the ratio of the
number of times the event occurs to the number of trials, as the number of trials
becomes indefinitely large, is called the probability of happening of the event, it
being assumed that the limit is finite and unique.
Symbolically, if in N trials an event E happens M times, then the probability of the
happening of E, denoted by P(E), is given by:

P(E) = lim_{N→∞} M/N    ... (3.2)

This is also known as the statistical (or empirical) probability.
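The limiting-ratio idea can be illustrated by simulation: as the number of coin tosses N grows, the relative frequency M/N settles near 1/2. This is a sketch with a fixed seed for reproducibility:

```python
import random

def relative_frequency(n_trials, seed=0):
    """Estimate P(heads) for a fair coin as the ratio M/N over n_trials tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

# The estimate drifts toward 1/2 as N becomes large.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```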
Probability

Limitations

If an experiment is to be repeated a large number of times, the experimental
conditions may not remain constant.
The limit may not attain a unique value, however large N may be.
Probability

Overview of set theory

3.7.1. Sets and Elements of Sets. A set is a well-defined collection or aggregate
of all possible objects having given properties, specified according to a well-defined
rule. The objects comprising a set are called elements, members or points of
the set. Sets are often denoted by capital letters, viz., A, B, C, etc. If x is an element
of the set A, we write symbolically x ∈ A (x belongs to A). If x is not a member of the
set A, we write x ∉ A (x does not belong to A). Sets are often described by stating the
properties possessed by their members. Thus the set A of all non-negative rational
numbers with square less than 2 will be written as
A = {x : x rational, x ≥ 0, x² < 2}.
If every element of the set A belongs to the set B, i.e., if x ∈ A ⇒ x ∈ B, then we say
that A is a subset of B and write symbolically A ⊂ B (A is contained in B) or
B ⊃ A (B contains A). Two sets A and B are said to be equal or identical if A ⊂ B and
B ⊂ A, and we write A = B or B = A.
A null or an empty set is one which does not contain any element at all, and is
denoted by φ.
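These set relations map directly onto Python's built-in `set` operators; a small sketch of membership, subset and equality (the particular sets chosen here are illustrative):

```python
# Membership, subset and equality checks mirroring the definitions above.
A = {x for x in range(10) if x % 2 == 1}   # odd numbers below 10
B = set(range(10))                         # all integers below 10

assert 3 in A                  # x ∈ A
assert 4 not in A              # x ∉ A
assert A <= B                  # A ⊂ B (A is a subset of B)
assert B >= A                  # B ⊃ A (B contains A)
assert A == {1, 3, 5, 7, 9}    # equality: A ⊆ B and B ⊆ A
print(sorted(A))
```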
Probability

Overview of set theory

3-8-2. Event
Definition. A non-empty subset A of the sample space S of a random experiment E,
which is a disjoint union of single-element subsets of S, is called an event.
Probability

Axiomatic Definition of probability


Probability

A few theorems on probability

P(φ) = 0
P(Ac ) = 1 − P(A)
For any two events A and B we have
P(Ac ∩ B) = P(B) − P(A ∩ B)
P(B c ∩ A) = P(A) − P(A ∩ B)
Addition theorem of Probability

P(A ∪ B) = P(A) + P(B) − P(A ∩ B) (2)
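These identities can be checked directly on a small equally likely sample space; a sketch using exact fractions:

```python
from fractions import Fraction

# Equally likely sample space of one die roll; P(E) = |E| / |S|.
S = {1, 2, 3, 4, 5, 6}
P = lambda E: Fraction(len(E), len(S))

A = {1, 3, 5}   # "the outcome is odd"
B = {1, 2, 3}   # "the outcome is at most 3"

# Addition theorem: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
assert P(A | B) == P(A) + P(B) - P(A & B)
# Complement rule: P(Ac) = 1 − P(A)
assert P(S - A) == 1 - P(A)
# P(Ac ∩ B) = P(B) − P(A ∩ B)
assert P((S - A) & B) == P(B) - P(A & B)
print(P(A | B))  # 2/3
```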


Probability

Conditional Probability

3-10. CONDITIONAL PROBABILITY

As discussed earlier, the probability P(A) of an event A represents the likelihood
that a random experiment will result in an outcome in the set A, relative to the sample
space S of the random experiment. However, quite often, while evaluating some event
probability, we already have some information stemming from the experiment. For
example, if we have prior information that the outcome of the experiment must be in
a set B of S, then this information must be used to re-appraise the likelihood that the
outcome will also be in A. This re-appraised probability is denoted by P(A|B) and is
read as the conditional probability of the event A, given that the event B has already
happened.
We give below an illustration to explain this concept.
Illustration 1. Let us consider the random experiment of drawing a card from a
pack of cards. Then the probability of happening of the event A: "the card drawn is a
king", is given by
P(A) = 4/52 = 1/13.
Now suppose that a card is drawn and we are informed that the drawn card is
red. How does this information affect the likelihood of the event A?
Obviously, if the event B: "the card drawn is red" has happened, the event "black
card" is not possible. Hence the probability of the event A must be computed relative
to the new sample space B, which consists of 26 sample points (red cards), i.e.,
n(B) = 26. Among these 26 red cards, there are two (red) kings, so that n(A ∩ B) = 2.
Hence, the required probability is given by:
P(A|B) = n(A ∩ B)/n(B) = 2/26 = 1/13.
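The card illustration can be reproduced by counting sample points, exactly as in the formula P(A|B) = n(A ∩ B)/n(B); the deck encoding below is illustrative:

```python
from fractions import Fraction
from itertools import product

# Full 52-card deck as (rank, suit) pairs.
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = set(product(ranks, suits))

A = {c for c in deck if c[0] == 'K'}                      # "card is a king"
B = {c for c in deck if c[1] in ('hearts', 'diamonds')}   # "card is red"

P_A = Fraction(len(A), len(deck))          # 4/52 = 1/13
P_A_given_B = Fraction(len(A & B), len(B)) # 2/26 = 1/13
print(P_A, P_A_given_B)  # 1/13 1/13
```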
Probability

Multiplication Theorem of Probability

3.11. MULTIPLICATION THEOREM OF PROBABILITY

Theorem 3.9. For two events A and B,
P(A ∩ B) = P(A) · P(B|A),  P(A) > 0    ... (3.17)
         = P(B) · P(A|B),  P(B) > 0
where P(B|A) represents the conditional probability of occurrence of B when the event
A has already happened, and P(A|B) is the conditional probability of happening of A,
given that B has already happened.
Probability

Independence of events

3-12. INDEPENDENT EVENTS

Consider the experiment of throwing two dice, say die 1 and die 2. It is obvious
that the occurrence of a certain number of dots on die 1 has nothing to do with a
similar event for die 2. The two are quite independent of each other. But suppose
the dice were connected with a piece of thread before being thrown. The situation
changes. This time the two events are not independent, in as much as the uppermost
face of one die will have something to do in causing a particular face of the other die
to be uppermost; and the shorter the thread, the more is this influence or dependence.
Similarly, if we draw two cards from a pack of cards in succession, then the
results of the two draws are independent if the cards are drawn with replacement
(i.e., if the first card drawn is placed back in the pack before drawing the second
card), and the results of the two draws are not independent if the cards are drawn
without replacement.

Definition. An event A is said to be independent (or statistically independent) of
another event B, if the conditional probability of A given B, i.e., P(A|B), is equal to
the unconditional probability of A, i.e., if P(A|B) = P(A).    ... (3.19)
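The with-replacement versus without-replacement contrast can be verified by exhaustive counting over ordered pairs of draws. This is a sketch; rank 0 is arbitrarily made to play the role of "king" in the encoding:

```python
from fractions import Fraction
from itertools import product, permutations

deck = [(r, s) for r in range(13) for s in range(4)]   # 13 ranks, 4 suits

def p_second_is_king(pairs):
    """Return P(second card is a king) and P(second is king | first is king)."""
    king = lambda c: c[0] == 0          # rank 0 stands in for "king"
    total = len(pairs)
    second_king = sum(1 for a, b in pairs if king(b))
    first_king = sum(1 for a, b in pairs if king(a))
    both = sum(1 for a, b in pairs if king(a) and king(b))
    return Fraction(second_king, total), Fraction(both, first_king)

# With replacement: ordered pairs may repeat a card -> draws are independent.
with_rep = list(product(deck, repeat=2))
# Without replacement: ordered pairs of distinct cards -> draws are dependent.
without_rep = list(permutations(deck, 2))

p, p_cond = p_second_is_king(with_rep)
print(p == p_cond)   # True:  P(B|A) = P(B) = 1/13
p, p_cond = p_second_is_king(without_rep)
print(p == p_cond)   # False: P(B|A) = 3/51, while P(B) = 4/52
```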
Probability

Multiplication theorem for independent events

3-13. MULTIPLICATION THEOREM OF PROBABILITY FOR INDEPENDENT EVENTS

Theorem 3.11. If A and B are two events with positive probabilities (P(A) ≠ 0,
P(B) ≠ 0), then A and B are independent if and only if
P(A ∩ B) = P(A) · P(B).    ... (3.21)
Probability

sample space

Definition 2. The sample space of a statistical experiment is a pair (Ω, S), where
(a) Ω is the set of all possible outcomes of the experiment;
(b) S is a σ-field of subsets of Ω.

The elements of Ω are called sample points. Any set A ∈ S is known as an
event. Clearly, A is a collection of sample points. We say that an event A happens
if the outcome of the experiment corresponds to a point in A. Each one-point set is
known as a simple or elementary event. If the set Ω contains only a finite number of
points, we say that (Ω, S) is a finite sample space. If Ω contains at most a countable
number of points, we call (Ω, S) a discrete sample space. If, however, Ω contains
uncountably many points, we say that (Ω, S) is an uncountable sample space.
Probability

Let (Ω, S) be the sample space associated with a statistical experiment. In this
section we define a probability set function and study some of its properties.
Definition 1. Let (Ω, S) be a sample space. A set function P defined on S is
called a probability measure (or simply, probability) if it satisfies the following
conditions:
(i) P(A) ≥ 0 for all A ∈ S.
(ii) P(Ω) = 1.
(iii) Let {A_j}, A_j ∈ S, j = 1, 2, ..., be a disjoint sequence of sets; that is,
A_j ∩ A_k = ∅ for j ≠ k, where ∅ is the null set. Then

P(⋃_{j=1}^∞ A_j) = Σ_{j=1}^∞ P(A_j)    (1)

where we have used the notation ⋃ A_j to denote the union of the disjoint sets A_j.
Probability

In Chapter 1 we studied properties of a set function P defined on a sample space
(Ω, S). Since P is a set function, it is not very easy to handle; we cannot perform
arithmetic or algebraic operations on sets. Moreover, in practice one frequently
observes some function of elementary events. When a coin is tossed repeatedly,
which replication resulted in heads is not of much interest. Rather, one is interested
in the number of heads, and consequently the number of tails, that appear in, say, n
tossings of the coin. It is therefore desirable to introduce a point function on the
sample space. We can then use our knowledge of calculus or real analysis to study
properties of P.
Probability

Random Variable

Definition 1. Let (Ω, S) be a sample space. A finite, single-valued function X that
maps Ω into R is called a random variable (RV) if the inverse images under X of all
Borel sets in R are events, that is, if

X⁻¹(B) = {ω : X(ω) ∈ B} ∈ S  for all Borel sets B.    (1)
Probability

Random Variable

5-1. INTRODUCTION
In the previous chapters, we have discussed how the concept of the random
experiment leads to the notion of a sample space. The assignment and computation
of probabilities of events were studied in detail. In many experiments, we are
interested not in knowing which of the outcomes has occurred, but in a number
associated with them. For example, when n coins are tossed, one may be interested
in knowing the number of heads obtained. When a pair of dice is tossed, one may
seek information about the sum of points. Thus, we associate a real number with
each outcome of an experiment. In other words, we are considering a function whose
domain is the set of possible outcomes, and whose range is a subset of the set of
reals. Such a function is called a random variable.
Intuitively, by a random variable (r.v.) we mean a real number X connected with the
outcome of a random experiment E. For example, if E consists of two tosses of a
coin, we may consider the random variable which is the number of heads (0, 1 or 2).

Outcome:    HH  HT  TH  TT
Value of X:  2   1   1   0

Thus to each outcome ω, there corresponds a real number X(ω). Since the points of
the sample space S correspond to outcomes, this means that a real number, which we
denote by X(ω), is defined for each ω ∈ S. From this standpoint, we define a
random variable to be a real function on S as follows:
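The two-toss example above can be written out as an explicit function on the sample space, with the induced distribution of X recovered by counting equally likely outcomes; a sketch:

```python
from fractions import Fraction
from itertools import product

# Sample space of two coin tosses; X maps each outcome to its number of heads.
omega = list(product('HT', repeat=2))     # ('H','H'), ('H','T'), ('T','H'), ('T','T')
X = lambda w: w.count('H')

# Induced distribution of X over equally likely outcomes:
# X takes values 0, 1, 2 with probabilities 1/4, 1/2, 1/4.
pmf = {}
for w in omega:
    pmf[X(w)] = pmf.get(X(w), Fraction(0)) + Fraction(1, len(omega))
print(sorted(pmf.items()))
```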
Probability

Properties

Some theorems on Random Variables. Here we shall state (without proof)
some of the fundamental results and theorems on random variables.
Consider the probability space (S, B, P).
1. A function X(ω) from S to R = (−∞, ∞) is a r.v. if and only if, for all real a,
{ω : X(ω) < a} ∈ B.
2. If X₁ and X₂ are random variables and C is a constant, then CX₁, X₁ + X₂ and
X₁X₂ are also random variables.
Remark. It follows that C₁X₁ + C₂X₂ is a r.v. for constants C₁ and C₂. In particular,
−X is a r.v.
3. If X is a random variable, then
(i) 1/X, where (1/X)(ω) = ∞ if X(ω) = 0,
(ii) X⁺(ω) = max {0, X(ω)},
(iii) X⁻(ω) = −min {0, X(ω)}, and
(iv) |X|
are random variables.
4. If X₁ and X₂ are random variables, then (i) max {X₁, X₂} and (ii) min {X₁, X₂}
are also random variables.
5. If X is a r.v. and f(·) is a continuous function, then f(X) is a r.v.
6. If X is a r.v. and f(·) is an increasing function, then f(X) is a r.v.
Corollary. If f is a function of bounded variation on every finite interval [a, b], and
X is a r.v., then f(X) is a r.v.
Probability

Distribution Function

5-2. DISTRIBUTION FUNCTION

Definition. Let X be a random variable. The function F defined for all real x by

F(x) = P(X ≤ x) = P{ω : X(ω) ≤ x},  −∞ < x < ∞,    ... (5.1)

is called the distribution function (d.f.) of the r.v. X.

Remark. A distribution function is also called the cumulative distribution function.
Sometimes, the notation F_X(x) is used to emphasise the fact that the distribution
function is associated with the particular random variable X. Clearly, the domain of
the distribution function is (−∞, ∞) and its range is [0, 1].
Probability

Properties

5-2-1. Properties of Distribution Function. We now proceed to derive a number
of properties common to all distribution functions.
1. If F is the d.f. of the r.v. X and if a < b, then P(a < X ≤ b) = F(b) − F(a).
2. If F is the d.f. of a one-dimensional r.v. X, then (i) 0 ≤ F(x) ≤ 1, and
(ii) F(x) ≤ F(y) if x < y.
In other words, all distribution functions are monotonically non-decreasing and lie
between 0 and 1.
3. If F is the d.f. of a one-dimensional r.v. X, then
F(−∞) = lim_{x→−∞} F(x) = 0 and F(∞) = lim_{x→∞} F(x) = 1.
Probability

Discrete Random variable

A discrete random variable is one which can take only a countable number of real
values; in other words, a real-valued function defined on a discrete sample space.
E.g. the number of students.

5-3-1. Probability Mass Function. If X is a one-dimensional discrete random
variable taking at most a countably infinite number of values x₁, x₂, ..., then its
probabilistic behaviour at each real point is described by a function called the
probability mass function (or discrete density function), which is defined below.
Definition. If X is a discrete random variable with distinct values x₁, x₂, ..., xₙ, ...,
then the function p(x) defined as

p(x) = P(X = xᵢ) = pᵢ, if x = xᵢ;  p(x) = 0, if x ≠ xᵢ;  i = 1, 2, ...

is called the probability mass function of the r.v. X.
This is defined such that p(xᵢ) ≥ 0 ∀i and Σ_{i=1}^∞ p(xᵢ) = 1.
Probability

Distribution function of a discrete random variable

If a rv X takes a countable number of values x₁, x₂, ..., xₙ, ... with associated
probabilities p₁, p₂, ..., pₙ, ..., such that p(xᵢ) ≥ 0 ∀i and Σ_{i=1}^∞ p(xᵢ) = 1,
then the cumulative distribution function of the rv X is given by

F(x) = Σ_{i : xᵢ ≤ x} pᵢ    (3)

Note that F(x) is a step function with a discrete jump of pᵢ at the point xᵢ, and is
constant between successive values of X.
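For a fair die, the discrete cdf in (3) can be computed by summing the pmf over points xᵢ ≤ x; note F is flat between jumps, e.g. F(3.5) = F(3). A sketch:

```python
from fractions import Fraction

# pmf of a fair die: p(x_i) = 1/6 for x_i = 1, ..., 6.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def F(x):
    """Discrete cdf: F(x) = sum of p_i over points x_i <= x (a step function)."""
    return sum((p for xi, p in pmf.items() if xi <= x), Fraction(0))

print(F(0), F(3), F(3.5), F(6))  # 0 1/2 1/2 1
```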
Probability

F(x)

[Figure: step-function graph of the distribution function F(x), with jumps at x = 1, 2, ..., 7]
Probability

These values are summarized in the following probability table:

X:      2     3     4     5     6     7     8     9     10    11    12
P(X):  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

The chart of the probability distribution is given below:

[Fig. 5-1 (a): Probability function of X; Fig. 5-1 (b): Distribution function of X]
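The table above (X = sum of points on two fair dice) can be rebuilt by enumerating all 36 equally likely outcomes; a sketch:

```python
from fractions import Fraction
from itertools import product

# Distribution of X = sum of points when two fair dice are tossed.
pmf = {}
for d1, d2 in product(range(1, 7), repeat=2):
    s = d1 + d2
    pmf[s] = pmf.get(s, Fraction(0)) + Fraction(1, 36)

for s in range(2, 13):
    print(s, pmf[s])   # 2 -> 1/36, ..., 7 -> 1/6, ..., 12 -> 1/36
```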
Probability

Continuous random Variable

A rv X is said to be continuous if it can take all possible values between certain
limits; in other words, if its different values cannot be put in 1-1 correspondence
with a set of positive integers.
Probability

5-4-1. Probability Density Function


(Concept and Definition). Consider the small interval (x, x + dx) of length dx
round the point x. Let f(x) be a continuous function of x, so that f(x) dx represents
the probability that X falls in the infinitesimal interval (x, x + dx). Symbolically,

P(x ≤ X ≤ x + dx) = f_X(x) dx    ... (5.5)

In the figure [Fig. 5-7], f(x) dx represents the area bounded by the curve y = f(x),
the x-axis and the ordinates at the points x and x + dx. The function f_X(x) so
defined is known as the probability density function, or simply density function, of
the random variable X, and is usually abbreviated as p.d.f. The expression f(x) dx,
usually written as dF(x), is known as the probability differential, and the curve
y = f(x) is known as the probability density curve or simply probability curve.

Definition. The p.d.f. f_X(x) of the r.v. X is defined as

f_X(x) = lim_{δx→0} P(x ≤ X ≤ x + δx)/δx    ... (5.5 a)

The probability for a variate value to lie in the interval dx is f(x) dx, and hence the
probability for a variate value to fall in the finite interval [α, β] is:

P(α ≤ X ≤ β) = ∫_α^β f(x) dx    ... (5.5 b)

which represents the area between the curve y = f(x), the x-axis and the ordinates at
x = α and x = β. Further, since the total probability is unity, we have ∫_a^b f(x) dx = 1,
where [a, b] is the range of the random variable X. The range of the variable may be
finite or infinite.
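As a numerical sketch of (5.5 b) and the unit-total-probability condition, take the hypothetical pdf f(x) = 3x² on [0, 1] (non-negative and integrating to 1) and approximate the integrals with a simple midpoint rule:

```python
# Hypothetical pdf: f(x) = 3x^2 on [0, 1].
f = lambda x: 3 * x**2

def integrate(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(f, 0, 1)     # total probability, should be ~1
p = integrate(f, 0.5, 1.0)     # P(0.5 <= X <= 1) = 1 - 0.5^3 = 0.875
print(round(total, 6), round(p, 6))  # 1.0 0.875
```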
Probability

Properties of PDF

(i) f(x) ≥ 0,  (ii) ∫_{−∞}^{∞} f(x) dx = 1,
(iii) the probability P(E) of an event E is given by P(E) = ∫_E f(x) dx.
Probability

Let f_X(x) or f(x) be the p.d.f. of a r.v. X, where X is defined from a to b. Then

(i) Arithmetic Mean = ∫_a^b x f(x) dx    ... (5.6)

(ii) Harmonic Mean. Harmonic mean H is given by: 1/H = ∫_a^b (1/x) f(x) dx

(iii) Geometric Mean. Geometric mean G is given by: log G = ∫_a^b log x · f(x) dx

(iv) μ_r' (about origin) = ∫_a^b x^r f(x) dx    ... (5.7)
     μ_r' (about the point x = A) = ∫_a^b (x − A)^r f(x) dx    ... (5.7 a)
and  μ_r (about mean) = ∫_a^b (x − mean)^r f(x) dx    ... (5.7 b)

In particular, from (5.6) and (5.7), we have
μ_1' (about origin) = Mean = ∫_a^b x f(x) dx, and hence

μ_2 = μ_2' − μ_1'² = ∫_a^b x² f(x) dx − (∫_a^b x f(x) dx)²    ... (5.7 c)

From (5.7), on putting r = 3 and 4 respectively, we get the values of μ_3' and μ_4',
and consequently the moments about the mean can be obtained by using the relations

μ_3 = μ_3' − 3μ_2'μ_1' + 2μ_1'³ and μ_4 = μ_4' − 4μ_3'μ_1' + 6μ_2'μ_1'² − 3μ_1'⁴    ... (5.7 d)

and hence β_1 and β_2 can be computed.

(v) Median. Median is the point which divides the entire distribution into two equal
parts. In case of a continuous distribution, the median is the point which divides the
total area into two equal parts. Thus if M is the median, then

∫_a^M f(x) dx = ∫_M^b f(x) dx = 1/2    ... (5.8)

Thus, solving ∫_a^M f(x) dx = 1/2 or ∫_M^b f(x) dx = 1/2    ... (5.8 a)
for M, we get the value of the median.

(vi) Mean Deviation. Mean deviation about the mean is given by:

M.D. = ∫_a^b |x − mean| f(x) dx    ... (5.9)

In general, mean deviation about an average 'A' is given by:

M.D. about 'A' = ∫_a^b |x − A| f(x) dx    ... (5.9 a)