Research Methodology - Comprehensive Study Notes

The document outlines a comprehensive study on research methodology, covering key concepts such as defining research problems, designing research, sampling methods, measurement techniques, data collection, and data analysis. It emphasizes the importance of systematic inquiry, proper problem definition, and the selection of appropriate research designs and methods to ensure valid and reliable results. The document also highlights the distinction between primary and secondary data collection methods and the significance of processing and analyzing data to draw meaningful conclusions.

Research Methodology: Comprehensive Study Notes
Chapter 1: Research Methodology – An Introduction
Research is “a systematic and scientific search for pertinent information on a specific topic” 1 . In common
terms it is a quest for knowledge; technically, Kothari defines research as an art of scientific investigation to
gain or verify knowledge 1 . According to scholars, research involves carefully defining a problem,
formulating hypotheses, collecting and analyzing data, and testing conclusions 2 3 . Research objectives typically fall into broad categories:
- Exploratory/Formulative: To gain familiarity or new insights about a phenomenon 4 .
- Descriptive/Diagnostic: To describe characteristics or frequencies (often in surveys) 5 .
- Analytical/Causal: To test hypotheses or causal relationships between variables 6 .
- Applied vs. Fundamental: Applied (action) research seeks solutions to immediate problems, whereas
fundamental (basic) research builds theory or general knowledge 7 .
- Quantitative vs. Qualitative: Quantitative research deals with measurable quantities, while qualitative
research investigates qualities, motives or reasons (often via interviews or case studies) 8 .
- Conceptual vs. Empirical: Conceptual research relates to abstract theory, while empirical research relies
on observation or experiments 9 .

Research approaches broadly divide into quantitative (data expressed numerically, enabling statistical
analysis) and qualitative (subjective insights, using interviews or observation) 8 10 . The significance of
research lies in expanding knowledge, solving problems, informing decisions and advancing society. Good
research is objective, valid and reliable, following scientific methods and sound methodology 11 3 . In
summary, research is a careful, systematic inquiry by which we “move from the known to the unknown” to
discover new facts or truths 1 2 .

Chapter 2: Defining the Research Problem


A research problem is a clear, concise statement of the issue to be studied. Kothari notes that a problem
exists when a researcher faces two or more courses of action and is unsure which is best, aiming to achieve
certain outcomes 12 . In practice, defining the problem involves several steps:
- General Statement: Write a broad description of the problem in everyday language.
- Understand Its Nature: Clarify what the problem is about and its boundaries.
- Literature Survey: Examine existing research to refine understanding and avoid duplication.
- Operationalization: Identify key variables, factors and possible relationships.
- Rephrasing: Reformulate the problem as a precise research question or hypothesis.

Researchers must select a problem that is researchable, original, and of interest or importance 12 . Poorly
defined problems can lead to wasted resources or incorrect conclusions. Kothari emphasizes that properly
defining the problem is crucial before data collection begins. In practice, one should define the research
problem in measurable terms, justify its relevance, and refine it iteratively. A well-defined problem often includes the main objective, population, variables, and expected outcomes. Finally, the problem statement
is converted into specific hypotheses or research questions to be tested or explored.

Chapter 3: Research Design


A research design is the blueprint or overall plan for conducting a study. It specifies the procedures for
collecting, measuring and analyzing data to ensure the study addresses its objectives 13 . In Kothari’s
words, a design is “a plan or blue-print of a study” covering what to study, how, where and with what
resources 13 14 . Good design ensures efficient use of time and resources and maximizes the validity and
reliability of findings 15 16 . Key features of an effective design include minimizing bias, ensuring
representativeness, and maximizing precision 16 .

Different research designs serve different purposes. The main types are:
- Exploratory (Formulative) Research: Used to gather preliminary information, define problems or develop hypotheses. It is flexible and unstructured, employing methods like literature surveys, expert interviews (experience survey), and analysis of examples 17 .
- Descriptive Research: Aimed at portraying characteristics of a group or phenomenon. It uses structured
methods (e.g. surveys) to describe “what is” (frequencies, averages, correlations). Also known as ex post
facto research when variables cannot be controlled 18 .
- Diagnostic Research: Similar to descriptive, but seeks causes or relationships behind observed patterns.
- Experimental (Causal) Research: Involves manipulation of one or more variables to examine effects on
others. It requires control (e.g. random assignment) to establish cause-and-effect 17 19 . Common
principles (from Fisher) include randomization, replication and control, to reduce bias.

In practice, a researcher often uses mixed approaches (e.g. exploratory pilot followed by a descriptive
survey). The chosen design must align with the research questions and practical constraints.

Chapter 4: Sampling Design


Census vs. Sample: A census examines every unit in the population (universe), while a sample examines a
subset 20 . Kothari notes that a complete census, though maximally accurate in theory, is often impractical
due to cost, time and potential hidden biases 20 21 . Sampling involves selecting a representative subset
(sample) to infer about the whole population. The sample design must ensure representativeness and
accuracy.

Sample Design: This is a plan specifying how to select the sample. Key steps include 22 23 :
- Define the Universe: Clearly specify the population of interest (finite vs. infinite).
- Choose Sampling Units: Decide what constitutes an individual unit (person, household, etc.).
- Develop the Sampling Frame: Compile a list of all units (e.g. registry, directory) from which to sample.
This frame should closely match the population 24 .
- Determine Sample Size: Balance precision with cost. Larger samples reduce sampling error but increase
cost. Consider the desired confidence level and margin of error 25 .
- Select Sampling Technique: Decide on a sampling procedure (probability vs non-probability) suitable for
the study.
- Specify Parameters of Interest: Identify what will be estimated (mean, proportion, etc.), as this affects design.
- Budget and Constraints: Ensure the plan fits available resources and time.

Kothari emphasizes two costs in sampling: data collection cost and the cost of potential incorrect inferences
(bias or error) 26 . Good design aims to minimize both sampling error (random variation) and systematic
bias. Characteristics of a good sample design include 27 :
- Truly representative sample.
- Small sampling error (for a given cost).
- Feasible within budget and resources.
- Control over systematic bias.
- Results generalizable to the population with known confidence 27 .

Sampling Methods: Samples can be classified by representation basis:
- Probability Sampling: Each unit
has a known non-zero chance of selection. Common techniques are simple random sampling, systematic
sampling, stratified sampling, cluster sampling, multistage sampling etc. These allow for statistical
estimation of sampling error.
- Non-Probability Sampling: Selection without random mechanism, e.g. convenience sampling, purposive/
judgment sampling, quota sampling. Useful for exploratory or qualitative studies but not for generalizing
with known error bounds. Kothari notes that in purposive sampling “the researcher deliberately chooses
particular units” assuming they are typical 28 .

In practice, probability sampling (especially simple random or stratified) is preferred when precision is
needed 29 . Non-probability methods may be used when resources are limited or when representativeness
is less critical.
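The contrast between the two families of techniques can be sketched with Python's standard library. The population, its regions (used as strata), and the sample size here are all hypothetical:

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: 1,000 households labelled by region (the strata)
frame = [{"id": i, "region": random.choice(["north", "south", "east", "west"])}
         for i in range(1000)]

# Simple random sampling: every unit has an equal, known chance of selection
srs = random.sample(frame, k=50)

# Stratified sampling: draw proportionally from each region so the sample
# mirrors the population's composition
def stratified_sample(frame, key, k):
    strata = {}
    for unit in frame:
        strata.setdefault(unit[key], []).append(unit)
    sample = []
    for units in strata.values():
        share = round(k * len(units) / len(frame))
        sample.extend(random.sample(units, min(share, len(units))))
    return sample

strat = stratified_sample(frame, "region", 50)
print(len(srs), len(strat))
```

Convenience or purposive sampling, by contrast, would simply take whichever units are at hand, with no known selection probability and hence no error estimate.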

Chapter 5: Measurement and Scaling Techniques


Measurement is the process of assigning numbers to properties or attributes of objects according to rules
30 . In research, we measure both physical attributes (e.g. height, weight) and abstract concepts (e.g. attitude, motivation). Kothari explains that mapping real-world phenomena to numbers requires careful
definition of measurement units and levels 30 . Measurement accuracy is high for direct, physical attributes
but more challenging for abstract constructs.

Data are recorded on different levels of measurement:


- Nominal: Categories without numeric value (e.g. gender coded 0/1, or yes/no). Numbers identify groups
only 31 .
- Ordinal: Ranks or ordered categories (e.g. small, medium, large). We can say “greater than” but not
quantify differences 32 .
- Interval: Ordered with equal differences but no true zero (e.g. temperature in Celsius). We can compute
differences (10°C – 5°C), but ratios are meaningless 33 .
- Ratio: Ordered with equal intervals and a meaningful zero (e.g. weight, age, income). All arithmetic
operations apply 34 .

Choosing the correct scale influences which statistical methods are appropriate (e.g. mean is meaningful
only for interval/ratio).
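A minimal Python illustration of which summary statistics each level of measurement supports; the response data are hypothetical:

```python
from statistics import mean, median, mode

# Hypothetical responses at each level of measurement
nominal  = ["male", "female", "female", "male", "female"]  # categories only
ordinal  = [1, 2, 2, 3, 1]          # ranks: small=1, medium=2, large=3
interval = [10.0, 5.0, 20.0, 15.0]  # degrees C: differences meaningful, ratios not
ratio    = [52.0, 60.0, 45.0, 75.0]  # kg: true zero, all arithmetic applies

print(mode(nominal))    # only the mode is meaningful for nominal data
print(median(ordinal))  # median and mode are meaningful for ordinal data
print(mean(interval))   # the mean requires interval or ratio measurement
print(max(ratio) / min(ratio))  # ratios require a true zero point
```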

Scaling Techniques: These go beyond basic measurement, often to measure attitudes or preferences.
Kothari discusses several common scaling methods:
- Rating Scales: Respondents evaluate items on a scale
(e.g. Likert scales or graphic scales) 35 . For instance, a five-point Likert-type scale might ask respondents to
rate agreement from “strongly agree” to “strongly disagree.” Such scales are easy to use and analyze but
rely on respondents’ subjective judgments 35 36 . Rating scales may be graphical (a continuum with a
mark) or itemized (choosing from descriptive statements) 35 37 . They are widely used in surveys of
attitudes, satisfaction, or opinions.
- Ranking Scales (Comparative): Items or alternatives are ranked relative to each other. For example, in a
paired-comparison ranking, a respondent chooses between two items at a time 38 . This forces preference
ordering but can be difficult if many items are compared.
- Semantic Differential, Constant Sum, Thurstone and Guttman Scales: (Not detailed here) are other
methods. Thurstone’s method uses expert judges to assign scale values, Guttman’s scalogram arranges
items cumulatively, and constant sum asks respondents to allocate points among options. Kothari mentions
these advanced scales but emphasizes that Likert-type summated scales (combining several items) are
common in research 39 .
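Scoring a summated (Likert-type) scale can be sketched as below; the items, responses, and the choice of which item is reverse-coded are all hypothetical:

```python
# Hypothetical five-point Likert items (1 = strongly disagree ... 5 = strongly agree).
# A summated scale adds the item scores; negatively worded items are
# reverse-coded first so that higher totals always mean stronger agreement.
responses = {"q1": 4, "q2": 5, "q3": 2, "q4": 1}  # one respondent
reverse_coded = {"q4"}                            # q4 is negatively worded

def summated_score(responses, reverse_coded, points=5):
    total = 0
    for item, score in responses.items():
        # Reverse-code on a 5-point scale: 1 <-> 5, 2 <-> 4, 3 unchanged
        total += (points + 1 - score) if item in reverse_coded else score
    return total

print(summated_score(responses, reverse_coded))
```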

Key Points in Measurement/Scaling: Valid and reliable scales are critical. Researchers must define
concepts clearly, pretest questionnaires, and ensure consistency in coding. Measurement error (e.g.
ambiguous questions, respondent bias) should be minimized. In summary, measurement converts
qualitative concepts into quantitative data through well-defined scales, enabling statistical analysis 30 31 .

Chapter 6: Methods of Data Collection


Data collection involves gathering information to address the research questions. Methods fall into two
categories: primary (collected first-hand for the study) and secondary (existing data).

• Primary Data Collection:


• Observation: The researcher directly observes and records behavior or events. Kothari notes that
observational data are objective and eliminate respondent bias 40 . For example, a consumer
researcher might observe shoppers’ product choices rather than asking them. Observation is
suitable when respondents cannot self-report (e.g. children, animals) 40 . Observations can be
structured (predefined coding of observed variables) or unstructured, and participant (observer
engages in the environment) or non-participant. Structured observation ensures consistency, while
unstructured allows open-ended note-taking 41 . Participant observation (researcher becomes part
of the group) can yield in-depth insights but risks loss of objectivity 42 43 . Observation’s limitations
include high cost, limited scope (only observable behavior), and potential hidden biases 44 .
• Interview: Collecting data by asking questions orally. Personal (face-to-face) interviews involve an
interviewer asking questions directly 45 . They allow probing and clarifying but are time-consuming
and expensive. Telephone interviews offer wider reach at lower cost. Interviews can be structured
(fixed questions) or unstructured (open conversation).
• Questionnaires/Surveys: Written sets of questions that respondents fill out. Often used in large-
scale research. Mailed or online questionnaires can reach many people cheaply. Kothari notes
advantages of mail surveys include convenience and scalability 46 . However, common drawbacks
are low response rates and non-response bias 47 . Questionnaires are usually structured (with
specific, fixed-choice questions), which makes analysis easier 48 . Unstructured questionnaires use
open-ended questions but are harder to analyze.

• Schedules: Similar to questionnaires, but administered by an interviewer who records answers. They
ensure higher response rates and clarity, since the interviewer can explain questions 49 . The
difference between a schedule and a questionnaire is mainly the presence of an interviewer:
schedules are used when literacy is low or where explanations are needed 49 .

• Other Methods: Depending on field, methods like experiments, focus groups, case studies (single-
case in-depth studies), and projective techniques (word association, sentence completion) can be
used 50 51 . For example, a case study involves a detailed examination of one unit (person,
organization) to explore causal factors in context 52 . Case studies yield rich, holistic understanding
but are not generalizable 52 53 .

• Secondary Data Collection: This uses already existing information (government reports, academic
publications, organizational records, archives) 54 . Kothari stresses care with secondary data – one
must check reliability, relevance and adequacy 55 56 . While secondary data save time and money,
they may not fit the research objectives exactly. Researchers should verify the data’s source,
collection methods and definitions before use 57 58 .

Choosing the Method: Kothari advises selecting methods based on the nature of the study, resource
constraints and required precision 59 60 . Key considerations include: study objectives, funds, time,
precision needed and respondent accessibility. For instance, a limited budget and basic information needs
may favor mail surveys; ample resources and high precision may favor personal interviews. Often, mixed
methods (e.g. survey plus follow-up interview) yield more reliable results 61 .

In all cases, the instrument (questionnaire/interview guide) must be carefully designed and pretested.
Appendix guidelines (not detailed here) help in constructing effective questionnaires and conducting
interviews (structured sequence, clear wording, piloting, etc.).

Chapter 7: Processing and Analysis of Data


Once data are collected, they undergo processing and then analysis to extract meaningful results.

• Data Processing Operations:


• Editing: Reviewing raw data (questionnaires, schedules) to detect and correct errors or omissions
62 . This may involve field editing (immediate checks by interviewer) and central editing (thorough review by trained editor) to ensure data completeness and consistency 63 64 . Obvious mistakes
(e.g. inconsistent entries) are corrected or marked for follow-up.
• Coding: Converting responses into symbols or numeric codes so data can be tabulated. For
example, categorizing answers (e.g. “Male=1, Female=2”). Codes must be exhaustive and mutually
exclusive 65 . Coding schemes should be planned before fieldwork (precoded questionnaires) to
facilitate tabulation.
• Classification: Grouping coded data into categories or classes. Data are classified by common
characteristics (attributes like gender, qualitative descriptions) or by numerical class intervals 66
67 . This reduces voluminous raw data into manageable groups (e.g. frequency distributions).

• Tabulation: Arranging classified data into tables, often as simple frequency tables or cross-
tabulations. Tabulation summarizes data concisely, showing distributions and relationships between
variables (for example, a contingency table of responses by demographic group). Tabulation may be
done manually or by computer.
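The coding-then-tabulation steps above can be sketched in Python; the coded records (gender code, answer) are hypothetical:

```python
from collections import Counter

# Hypothetical coded responses after editing: (gender code, answer)
# Coding scheme: "1" = male, "2" = female (exhaustive, mutually exclusive)
records = [("1", "yes"), ("2", "no"), ("1", "yes"),
           ("2", "yes"), ("1", "no"), ("2", "yes")]

# Cross-tabulation: count responses falling in each (gender, answer) cell
table = Counter(records)
for gender in ("1", "2"):
    row = {answer: table[(gender, answer)] for answer in ("yes", "no")}
    print(gender, row)
```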

Each step aims to cleanse and organize data for analysis. Throughout, maintaining accuracy and
documentation of any changes is crucial.

• Data Analysis: After processing, analysis begins. It has two main facets: descriptive and inferential.
• Descriptive Analysis: Involves computing summary measures and visualizations for one or more
variables. Examples include measures of central tendency (mean, median, mode) and dispersion
(variance, standard deviation, range) 68 . Kothari notes that descriptive analysis profiles
distributions of variables and studies their size and shape 69 . Bivariate and multivariate descriptive
analysis (e.g. cross-tabulations, scatterplots) reveal relationships between variables (e.g. correlation
coefficients).
• Inferential (Statistical) Analysis: Goes beyond the sample to draw conclusions about the
population. It includes estimation (e.g. constructing confidence intervals for population parameters)
and hypothesis testing (determining if observed effects are statistically significant) 70 71 . Kothari
explains that inferential analysis uses sample data to make probabilistic statements (e.g. “with 95%
confidence, the true mean lies between…”). Key tools include tests of significance (t-tests, ANOVA,
chi-square, etc.) and confidence intervals. In research, such analyses answer “what can we infer
about the wider population given our sample?”
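A brief Python sketch of both facets, using a hypothetical sample of 10 measurements. The large-sample z interval (z = 1.96) is used for simplicity; with n = 10 a t critical value would be more appropriate:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical sample of 10 measurements
sample = [12.1, 11.4, 13.0, 12.8, 11.9, 12.5, 13.2, 11.7, 12.3, 12.6]
n = len(sample)

# Descriptive analysis: summarize the sample itself
x_bar = mean(sample)
s = stdev(sample)  # sample standard deviation (n - 1 denominator)
print(round(x_bar, 2), round(s, 2))

# Inferential analysis: an approximate 95% confidence interval for the
# population mean, using the standard error of the mean
se = s / sqrt(n)
ci = (x_bar - 1.96 * se, x_bar + 1.96 * se)
print(tuple(round(v, 2) for v in ci))
```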

Kothari highlights the role of correlation vs. causal (regression) analysis 19 . Correlation analysis
measures the joint variation of two or more variables (e.g. Pearson’s r) to see if they move together 19 .
Causal (regression) analysis examines how one variable affects another (developing predictive equations).
He notes that experimental research focuses on causal analysis, while many social/business studies focus
on correlation to understand relationships.

• Role of Statistics: Statistics are tools for research design and analysis 72 . Kothari emphasizes that
large datasets must be reduced using statistical measures before interpretation 73 . Descriptive
statistics (summaries) and inferential statistics (generalizations) together enable researchers to form
and test conclusions.

In summary, after data collection we clean and organize the data (editing, coding, tabulation) and then
summarize and interpret it using statistical measures. This leads to answering the research questions, testing
hypotheses, and drawing conclusions.

Chapter 8: Sampling Fundamentals


Sampling theory underpins inference from a sample to the population. Key points include:

• Why Sample: Sampling saves time and money since studying an entire population (census) is often
impractical 74 . A properly drawn sample (when population is large or infinite) provides sufficient
accuracy faster. Sampling is also the only option when measurement destroys the item (e.g. crash
tests) or when the population is conceptually infinite 75 . Importantly, sampling allows estimation of
sampling error, giving confidence bounds on results 76 .

• Fundamental Definitions: Kothari defines universe/population as all items under study, and
sampling frame as the list from which the sample is drawn 24 77 . A statistic is a measure computed
from the sample (e.g. sample mean), whereas a parameter is the true population value (e.g.
population mean) 78 . The difference between them is what sampling inference addresses. Sampling error is the random fluctuation of a sample statistic around the true parameter 79 . It decreases as
sample size increases and is smaller in homogeneous populations 79 . (Non-sampling errors – e.g.
measurement or non-response – can occur independently and are not eliminated by larger samples.)

• Sampling Distributions and Central Limit Theorem: As sample size grows, the distribution of the
sample mean (or proportion) becomes approximately normal, regardless of the population’s
distribution, by the Central Limit Theorem 80 . This allows use of normal probability for inference.
The standard deviation of the sampling distribution of the mean (the standard error) is σ/√n, where
σ is the population standard deviation 81 . If σ is unknown, the sample standard deviation can be
used to estimate it. Similarly, for a sample proportion p, the standard error is √(p q / n) 82 . These
formulas quantify how much a sample estimate will vary by chance.

• Estimation: Sample statistics are used to estimate parameters. For the population mean µ, the
sample mean is an unbiased estimator 80 . A confidence interval can be constructed, e.g.

X̄ ± z_{α/2} · σ/√n

where z is from the standard normal (≈1.96 for 95%). For example, if X̄ = 6.20, σ ≈ 3.8, n = 36, a 95% CI is 6.20 ± 1.96·(3.8/√36) = [4.96, 7.44] 81 83 . This means we can be 95% confident the true mean lies in that interval. Similar intervals exist for proportions: p ± z·√(p(1 − p)/n) 84 . Kothari's examples illustrate these calculations (e.g. a [52%, 76%] interval from a sample proportion of 64% 85 ).
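The worked mean interval can be checked directly. The proportion example at the end uses hypothetical values of p and n, since the text does not give the sample size behind its [52%, 76%] interval:

```python
from math import sqrt

# Worked example from the text: x_bar = 6.20, sigma = 3.8, n = 36, 95% level
x_bar, sigma, n, z = 6.20, 3.8, 36, 1.96

se = sigma / sqrt(n)                   # standard error = 3.8 / 6
ci = (x_bar - z * se, x_bar + z * se)
print(tuple(round(v, 2) for v in ci))  # reproduces the text's [4.96, 7.44]

# Proportion interval with hypothetical values: p = 0.64, n = 100
p, n_p = 0.64, 100
se_p = sqrt(p * (1 - p) / n_p)
p_ci = (p - z * se_p, p + z * se_p)
print(tuple(round(v, 2) for v in p_ci))
```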

• Sample Size Determination: A critical decision is choosing n. A sample too small lacks precision; too
large wastes resources. The sample size should be optimal: large enough to achieve desired
precision, but no larger. Factors influencing size include population heterogeneity (more variance
requires larger n) 86 , the number of groups or sub-classes needed, study purpose (detailed case
study vs broad survey), and sampling method (a well-chosen small random sample can outperform a
larger biased one) 86 . Also, the desired confidence level and margin of error dictate n: increasing
confidence or precision (narrower interval) requires a larger n 25 . Budget and practical constraints
further limit n 87 .

Kothari presents a standard approach: specify the acceptable error e for an estimate, then solve for n. For a
mean estimate with known σ, the formula is

e = z_{α/2} · σ/√n  ⇒  n = (z·σ/e)²

For example, to estimate a mean within ±3 units with 95% confidence (z = 1.96) and σ ≈ 4.8, we get n ≈ (1.96 · 4.8/3)² ≈ 9.8, i.e. a sample of 10 88 . Similar formulas exist for proportions: n = z²·p·q/e² when estimating a proportion with error e. Kothari notes this precision-based approach is common; a Bayesian cost–benefit approach is theoretically optimal but rarely used in practice 89 .
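Both sample-size formulas can be wrapped in small helpers; the proportion example uses the conservative p = 0.5 assumption, which is not from the text:

```python
from math import ceil

def n_for_mean(z, sigma, e):
    """Sample size to estimate a mean within +/- e at confidence level z."""
    return ceil((z * sigma / e) ** 2)  # round up: n must be an integer

def n_for_proportion(z, p, e):
    """Sample size to estimate a proportion within +/- e at confidence level z."""
    return ceil(z ** 2 * p * (1 - p) / e ** 2)

# Example from the text: e = 3, 95% confidence (z = 1.96), sigma = 4.8
print(n_for_mean(1.96, 4.8, 3))

# Hypothetical proportion: p = 0.5 (the most conservative choice), within +/-5%
print(n_for_proportion(1.96, 0.5, 0.05))
```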

In summary, sampling fundamentals cover why and how to draw samples. Researchers use statistical
theory (CLT, standard error, confidence intervals) to link sample results to population conclusions. Properly determining sample size and design is essential to obtaining valid, precise estimates from surveys and
experiments 74 80 .

All content above is based on authoritative research methodology texts 1 62 80 86 , ensuring comprehensive
coverage of Chapters 1–8.


Source: Research_Methodology_C_R_Kothari.pdf (file://file-61Wc8hDt3mx7bTJyXeTen1)
