Module V Large Sample (Z-Test) Part 1 Concepts

Module 5 of the MAT2001 Statistics for Engineers course covers hypothesis testing, focusing on large sample tests using the Z-test. It explains key concepts such as population, sample, hypothesis types, errors, critical regions, and the procedure for hypothesis testing. The module also distinguishes between parametric and non-parametric tests and provides examples of two-tailed and one-tailed tests along with their critical values.

Module 5 – Hypothesis Testing – I | NA

MAT2001 Statistics for Engineers

Syllabus: Testing of hypothesis – Introduction – Types of errors, critical region, procedure of testing a hypothesis – large sample tests – Z-test for single proportion, difference of proportions, mean and difference of means.

Hypothesis Testing – I

1. Large Sample Tests (Z-Test)

Preliminary Definitions

Population: A large group of individuals under study is called a population.

Sample: A finite subset of a population is called a sample.


Example: Sampling is quite often used in our day-to-day practical life. For instance, in a shop we assess the quality of rice, wheat or any other commodity by taking a handful of it from the bag and then deciding whether or not to purchase it.

Random sample: A random sample is one in which each member of the population has an
equal chance of being included in it.

Size of the sample: The number of members in a sample is called the size of the sample.

Parameter: Any statistical measure or constant of a population is called a parameter of the population. We generally denote the mean and variance of a population by μ and σ²; the correlation coefficient (ρ) and the proportion (P) are also parameters of a distribution. Parameters are functions of the population values. In general, population parameters are unknown.

Statistic: Any statistical measure computed from a sample, corresponding to a parameter, namely the mean (x̄), variance (s²), sample correlation coefficient (r), proportion (p), etc., is called a statistic. In other words, a statistic is a function of the sample observations. In general, sample statistics are used as estimates of the corresponding parameters.

Example: Let (x₁, x₂, …, xₙ) be a random sample from a population.

The sample mean is X̄ = (Σx)/n.

The sample variance is S² = (Σx²)/n − ((Σx)/n)².

X̄ and S² are statistics.

Sampling distribution: The distribution followed by a statistic is called the sampling distribution of the statistic.

Standard error (S.E.): The standard deviation (SD) of the sampling distribution of a statistic (e.g. mean, variance, correlation coefficient, skewness, etc.) is called the standard error of the statistic. For example, the standard deviation of the sampling distribution of the mean x̄ is known as the standard error of the mean.


The standard errors of some well-known statistics for large samples are given below, where n is the sample size, σ² is the population variance, P is the population proportion and Q = 1 − P; n₁ and n₂ denote the sizes of two independent random samples.

Statistic                                              Standard Error (S.E.)
Sample mean x̄                                          σ/√n
Observed sample proportion p                           √(PQ/n)
Difference between two sample means (x̄₁ − x̄₂)          √(σ₁²/n₁ + σ₂²/n₂)
Difference between two sample proportions (p₁ − p₂)    √(P₁Q₁/n₁ + P₂Q₂/n₂)

Uses of S.E.:
(i) The standard error plays a very important role in the theory of testing of hypotheses (for large samples).
(ii) If t is any statistic, then for large samples Z = (t − E(t)) / S.E.(t) ~ N(0, 1).
(iii) The standard error helps us to find the probable limits between which a parameter may lie.
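
As a quick numerical illustration of the standard errors tabulated above, the following minimal Python sketch computes S.E.(x̄) and S.E.(p); the figures used are purely hypothetical, not taken from these notes.

import math

# Hypothetical illustrative values (not from the notes)
sigma, n = 3.5, 60            # population SD and sample size
P, m = 0.4, 200               # population proportion and its sample size

se_mean = sigma / math.sqrt(n)         # S.E. of the sample mean: sigma / sqrt(n)
se_prop = math.sqrt(P * (1 - P) / m)   # S.E. of a sample proportion: sqrt(PQ/n)

print(f"S.E. of the mean       = {se_mean:.4f}")   # about 0.4518
print(f"S.E. of the proportion = {se_prop:.4f}")   # about 0.0346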


Test of Hypothesis

Introduction: The method of hypothesis testing uses tests of significance to determine the
likelihood that a statement (often related to the mean or variance of a given distribution) is true,
and at what likelihood we would, as statisticians, accept the statement as true.
A hypothesis should be specific, clear and precise. It should be stated, as far as possible, in simple terms so that it is easily understood by all, and it should state the relationship between variables.
While understanding the mathematical concepts that go into the formulation of these
tests is important, knowledge of how to appropriately use each test (and when to use which
test) is equally important.

Types of Hypotheses

Statistical hypothesis: Any statement about the probability distribution of a population is called a statistical hypothesis. (OR)
A statistical hypothesis is a conjecture about a population parameter. This conjecture may or may not be true.

Simple hypothesis: If a hypothesis describes a distribution completely, it is called a simple hypothesis.

Composite hypothesis: A statistical hypothesis which does not describe a distribution completely is called a composite hypothesis.

Example:
(i) The distribution is normal.
(ii) The distribution is normal with mean 100.
(iii) The distribution is normal with mean 100 and SD 3.5.
Note: (i) and (ii) are composite hypotheses and (iii) is a simple hypothesis.

Null hypothesis: A statistical hypothesis that is put to test under the assumption that it is true is called the null hypothesis, denoted H₀. (OR)
According to Prof. R. A. Fisher, the null hypothesis is "the hypothesis which is tested for possible rejection under the assumption that it is true." (OR)
The null hypothesis is a statistical hypothesis that states that there is no difference between a parameter and a specific value, or that there is no difference between two parameters.

Alternative hypothesis: Any statistical hypothesis which is complementary to the null hypothesis is called an alternative hypothesis and is denoted H₁. (OR)
The alternative hypothesis is a statistical hypothesis that states a specific difference between a parameter and a specific value, or states that there is a difference between two parameters. In other words, H₁ is complementary to H₀.

Types of tests

Example 1: Suppose we wish to test the null hypothesis H₀: μ = 65.
The alternative hypothesis can be any one of the following:


i) H₁: μ ≠ 65 (two-tailed test)
ii) H₁: μ < 65 (left-tailed test)
iii) H₁: μ > 65 (right-tailed test)

Example 2:

Two-Tailed Test: A medical researcher is interested in finding out whether a new medication
will have any undesirable side effects. The researcher is particularly concerned with the pulse
rate of the patients who take the medication.
What are the hypotheses to test whether the pulse rate will be different from the mean pulse
rate of 82 beats per minute?
H₀: μ = 82 and H₁: μ ≠ 82. This is a two-tailed test.

Right-Tailed Test: A chemist invents an additive to increase the life of an automobile battery.
If the mean lifetime of the battery is 36 months, then his hypotheses are
𝐻0 : 𝜇 = 36 and 𝐻1 : 𝜇 > 36.

Left-Tailed Test: A contractor wishes to lower heating bills by using a special type of
insulation in houses. If the average of the monthly heating bills is Rs.78, her hypotheses about
heating costs will be 𝐻0 : 𝜇 = 𝑅𝑠. 78 and 𝐻1 : 𝜇 < 𝑅𝑠. 78.

Types of errors

In testing a hypothesis, we come across two types of errors, known as Type I error and Type II error.

                                                   H₀ true                  H₀ false
                                                   (difference does NOT     (difference DOES
                                                   exist in population)     exist in population)
Reject H₀ (study reports a difference)             Type I error             Correct decision
Do not reject H₀ (study reports no difference)     Correct decision         Type II error

Type I error: Rejecting H₀ when H₀ is true is called a Type I error (i.e., accepting H₁ when H₁ is false).

Type II error: Not rejecting H₀ when H₀ is false is called a Type II error (i.e., rejecting H₁ when H₁ is true).

Critical Region: Let (x₁, x₂, …, xₙ) be a random sample from a population with probability density function f(x). The set of all possible values of (x₁, x₂, …, xₙ) constitutes an n-dimensional sample space Ω. In testing a hypothesis, we divide the sample space Ω into two regions ω and ω̄. If the sample point lies in the region ω, we reject H₀. This region is called the critical region, i.e., the critical region is the region in which the null hypothesis is rejected. The critical region is also called the region of rejection; ω̄ is called the region of acceptance.

Size of Type I error: The size of the Type I error is denoted by α and is defined as the probability of the Type I error,
i.e., α = P[Type I error] = P[rejecting H₀ when H₀ is true] = P[X ∈ ω | H₀ is true].
α is also called the level of significance, or the size of the critical region, or the size of the test.


Size of Type II error: The size of the Type II error is denoted by β and is defined as the probability of the Type II error,
i.e., β = P[Type II error] = P[not rejecting H₀ when H₁ is true] = P[X ∈ ω̄ | H₁ is true].

Power of the test: If 𝛽 is the size of the type II error then 1 − 𝛽 is called the power of the
test or power function.

Best critical region: Among a set of critical regions having the same size 𝛼, that critical
region which has the minimum type II error is called the best critical region.

Unbiased critical region: In testing a hypothesis, if the power of the test (1 − β) is not less than the size of the Type I error (α), then that critical region is called an unbiased critical region, and any test based on this critical region is called an unbiased test.

Important Tests of Hypothesis


For the purpose of testing a hypothesis, several tests have been developed. They can be classified as
➢ Parametric test
➢ Non-parametric test
Parametric tests are the standard tests of hypothesis (non-parametric tests are also known as distribution-free tests). The important parametric tests are
➢ Z-test (for Large Samples)
➢ t-test (for Small Samples)
➢ F-test (for Small Samples)

Z-test: If the sample size n is greater than or equal to 30 (n ≥ 30), the sample is called a large sample. The z-test is a statistical test for the mean of a population. It can be used for large samples, or when the population is normally distributed and σ is known.

Test of significance: Tests of significance enable us to decide, on the basis of a sample, whether the difference between an observed sample statistic and the parameter value is significant, or whether it can be attributed to fluctuations of sampling.

Level of significance (LOS): The level (probability) at which we are prepared to reject a null hypothesis when it is true is called the level of significance. The levels of significance generally used are 5% and 1%.

Critical Region: The critical region is the region of the standard normal curve corresponding to a predetermined level of significance. The remaining region under the normal curve is known as the acceptance region.

Two-Tailed Test: In testing a hypothesis if the critical region appears in both the tails of the
normal distribution, then the test is called a two-tailed test. Suppose we wish to test the null
hypothesis
𝐻0 : 𝜇 = 𝜇0
Against the alternative hypothesis
𝐻1 : 𝜇 ≠ 𝜇0


Then we use a two-tailed test, in which the critical region appears in both tails of the distribution.

One-Tailed Test: In testing a hypothesis if the critical region lies in either tail of the normal
distribution, then the test is called a one-tailed test. Suppose we wish to test the null hypothesis
𝐻0 : 𝜇 = 𝜇0
Against the alternative hypothesis
𝐻1 : 𝜇 < 𝜇0

Then we use a left-tailed test, in which the critical region lies in the left tail of the distribution.

Suppose we wish to test the null hypothesis

𝐻0 : 𝜇 = 𝜇0
Against the alternative hypothesis
𝐻1 : 𝜇 > 𝜇0

Then we use a right-tailed test, in which the critical region lies in the right tail of the distribution.


Critical value: The critical value is the value which separates the critical region and the
region of acceptance. The critical values for some standard LOS's are given in the following
table:

Level of Significance (α) / Critical Values (Zα)

Types of Tests     α = 1% (0.01)     α = 2% (0.02)     α = 5% (0.05)     α = 10% (0.1)
Two-Tailed         |Zα| = 2.58       |Zα| = 2.33       |Zα| = 1.96       |Zα| = 1.645
Right-Tailed       Zα = 2.33         Zα = 2.055        Zα = 1.645        Zα = 1.28
Left-Tailed        Zα = −2.33        Zα = −2.055       Zα = −1.645       Zα = −1.28
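
These critical values can be recovered from the standard normal distribution. The following Python sketch is only an illustration, assuming SciPy is available; scipy.stats.norm.ppf is the inverse of the standard normal CDF, and the printed values agree with the table up to rounding (e.g. 2.576 versus 2.58).

from scipy.stats import norm

for alpha in (0.01, 0.02, 0.05, 0.10):
    two_tailed = norm.ppf(1 - alpha / 2)   # |Z_alpha| for a two-tailed test
    right_tail = norm.ppf(1 - alpha)       # Z_alpha for a right-tailed test
    left_tail = norm.ppf(alpha)            # Z_alpha for a left-tailed test (negative)
    print(f"alpha={alpha:.2f}: two-tailed {two_tailed:.3f}, "
          f"right {right_tail:.3f}, left {left_tail:.3f}")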

Note: Since almost all distributions tend to the normal distribution for large values of n, we use the normal test for large samples; for small samples, we use tests based on the Chi-square (χ²), t and F distributions.

Procedure for Hypothesis Testing (For large samples/Asymptotic test)

The main question in hypothesis testing is whether to accept the null hypothesis or not. The following steps are involved in the test of significance for large samples (n ≥ 30).

Step 1: State hypotheses: Formulate the null hypothesis H₀ and the alternative hypothesis H₁.
Step 2: Nature of the test: Decide whether the test is one-tailed or two-tailed, based on H₁.
Step 3: Test statistic: Under H₀, compute the test statistic z = (t − E(t)) / S.E.(t) ~ N(0, 1). For a single mean from a large sample, when the population standard deviation σ is known, S.E.(x̄) = σ/√n and the test statistic is z = (x̄ − μ) / (σ/√n), which follows the standard normal distribution.
Step 4: Level of significance: Choose the level of significance 𝛼 (generally 5% or 1%).
Step 5: Critical value: From the normal tables, obtain the critical value zα, which depends upon the α (LOS) and the nature of the test (H₁).
Step 6: Comparison (Decision) and Conclusion:


➢ If |z| ≤ zα, H₀ is not rejected (or H₁ is rejected), i.e., there is no significant difference between the sample value and the parameter value at the α% LOS. Here |z| is the calculated value and zα is the table (critical) value.
➢ If |z| > zα, H₀ is rejected (or H₁ is accepted), i.e., there is a significant difference at the α% LOS.
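
The whole procedure can be wrapped in a small helper for convenience. The sketch below is only an illustration, not part of the course material: the function name z_decision and the example figures are assumptions, and SciPy is assumed to be available.

from scipy.stats import norm

def z_decision(t, expected, se, alpha=0.05, tail="two"):
    """Large-sample Z-test: compare z = (t - E(t)) / S.E.(t) with the critical value."""
    z = (t - expected) / se
    if tail == "two":
        z_alpha = norm.ppf(1 - alpha / 2)       # e.g. 1.96 at the 5% LOS
        reject = abs(z) > z_alpha
    elif tail == "right":
        z_alpha = norm.ppf(1 - alpha)           # e.g. 1.645 at the 5% LOS
        reject = z > z_alpha
    else:                                       # left-tailed
        z_alpha = norm.ppf(alpha)               # e.g. -1.645 at the 5% LOS
        reject = z < z_alpha
    return z, z_alpha, ("reject H0" if reject else "do not reject H0")

# Hypothetical usage: sample mean 52, H0 mean 50, sigma 5, n 100, two-tailed, 5% LOS
print(z_decision(52, 50, 5 / 100 ** 0.5, alpha=0.05, tail="two"))
# z = 4.0 > 1.96, so H0 is rejected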

Test of Significance: Large Samples (Z-test)


➢ Test of significance for single mean
➢ Test of significance for difference of means of two large samples
➢ Test of significance for a single proportion
➢ Test of significance for difference of proportions

I. Test of significance for single mean

Let (𝑥1 , 𝑥2 , … , 𝑥𝑛 ) be a random sample of size 𝑛, drawn from a large population with unknown
mean 𝜇 and known variance 𝜎 2 .

Let 𝑥̅ denote the mean of the sample and 𝑠 2 denote the variance of the sample.

We know that x̄ ~ N(μ, σ²/n). The standard normal variate corresponding to x̄ is z = (x̄ − μ) / S.E.(x̄), where S.E.(x̄) = σ/√n.

Step 1: We set up the null hypothesis that there is no difference between the sample mean and the population mean, H₀: μ = μ₀. Set up the alternative hypothesis as any one of the following:
H₁: μ ≠ μ₀
H₁: μ < μ₀
H₁: μ > μ₀
Step 2: The test statistic under H₀ is z = (x̄ − μ) / (σ/√n) ~ N(0, 1).

Step 3: Level of significance: Choose the level of significance 𝛼.

Step 4: Critical value: Obtain the critical value 𝑧𝛼 from normal tables.

Step 5: Decision & Conclusion:

➢ If |z| ≤ zα, H₀ is not rejected (or H₁ is rejected), i.e., there is no significant difference between the sample mean and the population mean at the α% LOS.
➢ If |z| > zα, H₀ is rejected (or H₁ is accepted), i.e., there is a significant difference between the sample mean and the population mean at the α% LOS.
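
A minimal Python sketch of this single-mean test, assuming SciPy is available; the sample figures are hypothetical, used only to show the steps.

import math
from scipy.stats import norm

def z_test_single_mean(x_bar, mu0, sigma, n, alpha=0.05):
    """Two-tailed large-sample z-test for a single mean: z = (x_bar - mu0) / (sigma/sqrt(n))."""
    z = (x_bar - mu0) / (sigma / math.sqrt(n))
    z_alpha = norm.ppf(1 - alpha / 2)        # two-tailed critical value
    return z, z_alpha, abs(z) > z_alpha      # True means reject H0

# Hypothetical data: x_bar = 68, H0: mu = 65, sigma = 10, n = 64, 5% LOS
z, z_crit, reject = z_test_single_mean(68, 65, 10, 64)
print(f"z = {z:.2f}, critical = {z_crit:.2f}, reject H0: {reject}")
# z = 2.40 > 1.96, so the difference is significant at the 5% LOS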


II. Test of significance for difference between means

Let x̄₁, x̄₂ and s₁², s₂² be the means and variances of two samples of sizes n₁, n₂, drawn from populations with means μ₁, μ₂ and variances σ₁², σ₂² respectively.

Step 1: We set up the null hypothesis, H₀: μ₁ = μ₂. Set up the alternative hypothesis as any one of the following:
H₁: μ₁ ≠ μ₂
H₁: μ₁ < μ₂
H₁: μ₁ > μ₂
Step 2: The test statistic under H₀ is z = (x̄₁ − x̄₂) / √(σ₁²/n₁ + σ₂²/n₂) ~ N(0, 1).

Step 3: Level of significance: Choose the level of significance 𝛼.

Step 4: Critical value: Obtain the critical value 𝑧𝛼 from normal tables.

Step 5: Decision & Conclusion:

➢ If |z| ≤ zα, H₀ is not rejected (or H₁ is rejected), i.e., there is no significant difference between the means at the α% LOS; in other words, the samples may be regarded as drawn from populations having the same mean.
➢ If |z| > zα, H₀ is rejected (or H₁ is accepted), i.e., there is a significant difference between the means at the α% LOS.

Note: For large samples we can replace the population variance by the sample variance if the
population variance is unknown.
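
A minimal Python sketch of the two-mean test, using the sample variances in place of the unknown population variances as the note above permits; the figures are hypothetical and SciPy is assumed.

import math
from scipy.stats import norm

def z_test_two_means(x1, x2, s1_sq, s2_sq, n1, n2, alpha=0.05):
    """Two-tailed z-test for the difference of two large-sample means."""
    se = math.sqrt(s1_sq / n1 + s2_sq / n2)   # S.E. of (x1_bar - x2_bar)
    z = (x1 - x2) / se
    z_alpha = norm.ppf(1 - alpha / 2)
    return z, z_alpha, abs(z) > z_alpha       # True means reject H0: mu1 = mu2

# Hypothetical samples: means 67.5 and 68.0, variances 2.56 and 2.25, sizes 400 and 400
z, z_crit, reject = z_test_two_means(67.5, 68.0, 2.56, 2.25, 400, 400)
print(f"z = {z:.2f}, critical = {z_crit:.2f}, reject H0: {reject}")
# |z| is about 4.56 > 1.96, so the two means differ significantly at the 5% LOS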

III. Test of significance for single proportions

Let 𝑝 denote the proportion in a sample of size 𝑛 drawn from a population in which the
proportion is 𝑃.

Step 1: We set up the null hypothesis, H₀: P = P₀. Set up the alternative hypothesis as any one of the following:
H₁: P ≠ P₀
H₁: P < P₀
H₁: P > P₀
Step 2: The test statistic under H₀ is z = (p − P) / √(PQ/n) ~ N(0, 1), where Q = 1 − P.

Step 3: Level of significance: Choose the level of significance 𝛼.


Step 4: Critical value: Obtain the critical value 𝑧𝛼 from normal tables.

Step 5: Decision & Conclusion:

➢ If |z| ≤ zα, H₀ is not rejected (or H₁ is rejected), i.e., there is no significant difference between the sample proportion and the population proportion at the α% LOS; in other words, the sample may be regarded as drawn from a population in which the proportion is P₀.
➢ If |z| > zα, H₀ is rejected (or H₁ is accepted), i.e., there is a significant difference between the sample proportion and the population proportion at the α% LOS.
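
The same steps for a single proportion, as a minimal Python sketch with hypothetical numbers (SciPy assumed).

import math
from scipy.stats import norm

def z_test_single_proportion(p, P0, n, alpha=0.05):
    """Two-tailed z-test for a single proportion: z = (p - P0) / sqrt(P0*Q0/n)."""
    Q0 = 1 - P0
    z = (p - P0) / math.sqrt(P0 * Q0 / n)
    z_alpha = norm.ppf(1 - alpha / 2)
    return z, z_alpha, abs(z) > z_alpha      # True means reject H0: P = P0

# Hypothetical data: 560 successes in 1000 trials, H0: P = 0.5, 5% LOS
z, z_crit, reject = z_test_single_proportion(560 / 1000, 0.5, 1000)
print(f"z = {z:.2f}, critical = {z_crit:.2f}, reject H0: {reject}")
# z is about 3.79 > 1.96, so the sample proportion differs significantly from 0.5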

IV. Test of significance for difference between proportions

Let p₁, p₂ be the proportions in two samples of sizes n₁ and n₂, drawn from two populations with proportions P₁ and P₂ respectively.

Step 1: We set up the null hypothesis that there is no difference between the two population proportions, H₀: P₁ = P₂. Set up the alternative hypothesis as any one of the following:
H₁: P₁ ≠ P₂
H₁: P₁ < P₂
H₁: P₁ > P₂
Step 2: The test statistic under H₀ is z = (p₁ − p₂) / √(P̂Q̂(1/n₁ + 1/n₂)) ~ N(0, 1), where P̂ = (n₁p₁ + n₂p₂)/(n₁ + n₂) and Q̂ = 1 − P̂.

Step 3: Level of significance: Choose the level of significance 𝛼.

Step 4: Critical value: Obtain the critical value 𝑧𝛼 from normal tables.

Step 5: Decision & Conclusion:

➢ If |z| ≤ zα, H₀ is not rejected (or H₁ is rejected), i.e., there is no significant difference between the proportions at the α% LOS.
➢ If |z| > zα, H₀ is rejected (or H₁ is accepted), i.e., there is a significant difference between the proportions at the α% LOS.
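
A minimal Python sketch of the two-proportion test, including the pooled estimate P̂; the figures are hypothetical and SciPy is assumed.

import math
from scipy.stats import norm

def z_test_two_proportions(p1, p2, n1, n2, alpha=0.05):
    """Two-tailed z-test for the difference of two proportions with a pooled estimate."""
    P_hat = (n1 * p1 + n2 * p2) / (n1 + n2)   # pooled proportion under H0: P1 = P2
    Q_hat = 1 - P_hat
    se = math.sqrt(P_hat * Q_hat * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    z_alpha = norm.ppf(1 - alpha / 2)
    return z, z_alpha, abs(z) > z_alpha       # True means reject H0

# Hypothetical data: 325/500 successes versus 405/600 successes, 5% LOS
z, z_crit, reject = z_test_two_proportions(325 / 500, 405 / 600, 500, 600)
print(f"z = {z:.2f}, critical = {z_crit:.2f}, reject H0: {reject}")
# |z| is about 0.87 < 1.96, so the difference is not significant at the 5% LOS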

Method of finding the confidence interval: Let (x₁, x₂, …, xₙ) be a random sample of size n, drawn from a population whose density function is f(x, θ). Let z be a variable depending on the sample observations and the unknown parameter θ. With the help of the distribution of z, we can find numbers z₁, z₂ such that P(z₁ < z < z₂) = 1 − α. Substituting for z, this can be written in the form P(c₁ < θ < c₂) = 1 − α; then (c₁, c₂) is the (1 − α) × 100% confidence interval for θ.


Confidence intervals for large samples (sample size 𝒏 ≥ 𝟑𝟎):

I. To find the (1 − α) × 100% confidence interval for a single mean:

Let (x₁, x₂, …, xₙ) be a random sample of size n, drawn from a large population with unknown mean μ and known variance σ². Since z = (x̄ − μ) / (σ/√n) ~ N(0, 1), the (1 − α) × 100% confidence interval for μ is x̄ ± zα · σ/√n, where zα is the two-tailed critical value (for example, 1.96 for 95% confidence).

II. To find the (1 − α) × 100% confidence interval for the difference of means:

Let x̄₁, x̄₂ and s₁², s₂² be the means and variances of two samples of sizes n₁, n₂, drawn from populations with means μ₁, μ₂ and variances σ₁², σ₂² respectively. Then the (1 − α) × 100% confidence interval for μ₁ − μ₂ is (x̄₁ − x̄₂) ± zα · √(σ₁²/n₁ + σ₂²/n₂), where zα is the two-tailed critical value.
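
A small Python sketch (hypothetical figures, SciPy assumed) that computes both of these large-sample confidence intervals.

import math
from scipy.stats import norm

def ci_mean(x_bar, sigma, n, conf=0.95):
    """(1 - alpha)*100% confidence interval for a single mean: x_bar +/- z*sigma/sqrt(n)."""
    z = norm.ppf(1 - (1 - conf) / 2)
    h = z * sigma / math.sqrt(n)
    return x_bar - h, x_bar + h

def ci_diff_means(x1, x2, s1_sq, s2_sq, n1, n2, conf=0.95):
    """(1 - alpha)*100% confidence interval for the difference of two means."""
    z = norm.ppf(1 - (1 - conf) / 2)
    h = z * math.sqrt(s1_sq / n1 + s2_sq / n2)
    return (x1 - x2) - h, (x1 - x2) + h

# Hypothetical figures
print(ci_mean(152, 2.1, 100))                           # about (151.59, 152.41)
print(ci_diff_means(67.5, 68.0, 2.56, 2.25, 400, 400))  # about (-0.71, -0.29)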


***************************************************************************

Example 1.1 A random sample of 60 observations had a mean of 140. Can this sample be regarded as drawn from a population with mean 148 and standard deviation 3.5?

Solution:
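
The worked solution is not reproduced here; a quick Python sketch of the single-mean z-test applied to the stated figures (n = 60, x̄ = 140, μ₀ = 148, σ = 3.5, two-tailed at the 5% LOS) gives:

import math

x_bar, mu0, sigma, n = 140, 148, 3.5, 60
z = (x_bar - mu0) / (sigma / math.sqrt(n))   # test statistic under H0: mu = 148
print(round(z, 2))                           # about -17.71
# |z| is about 17.71 > 1.96, so H0 is rejected at the 5% LOS: the sample cannot
# reasonably be regarded as drawn from a population with mean 148 and SD 3.5.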


Example 1.2 A random sample of 100 students taken from a college had a mean height of 152 cms with a SD of 2.1 cms. Test at the 5% level of significance whether the average height of the students of the college is more than 160 cms.

Solution:

In this case, we want to test whether the average height of college students is more than 160
cms. This indicates a one-tailed (specifically, right-tailed) test.
• Null Hypothesis (H₀): The population mean height is less than or equal to 160 cms (μ
≤ 160)
• Alternative Hypothesis (H₁): The population mean height is more than 160 cms (μ >
160)
However, for hypothesis testing, we typically set up the null hypothesis as an equality:
• Null Hypothesis (H₀): μ = 160 cms


• Alternative Hypothesis (H₁): μ > 160 cms


This approach is preferred because it allows us to calculate a specific probability under the
assumption that the population mean is exactly 160 cms.
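
Completing the computation with the stated figures (x̄ = 152, n = 100, and the sample SD 2.1 used in place of σ for this large sample), a quick Python sketch:

import math

x_bar, mu0, s, n = 152, 160, 2.1, 100
z = (x_bar - mu0) / (s / math.sqrt(n))   # test statistic under H0: mu = 160
print(round(z, 2))                       # about -38.1
# For the right-tailed test the critical value at the 5% LOS is 1.645; since
# z is about -38.1 and does not exceed 1.645, H0 is not rejected: the data give
# no evidence that the average height is more than 160 cms.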

