INTRODUCTION TO M&E
By the end of this module, learners should be able to:
1. Demonstrate an understanding of M&E concepts, methods and processes
2. Demonstrate an understanding of project management
3. Demonstrate an understanding of qualitative and quantitative data in monitoring and evaluation
4. Describe the monitoring and evaluation framework
INTRODUCTION TO M&E
Monitoring and Evaluation (M&E) is a powerful management tool that helps policy makers, decision makers, program and project implementers/managers and all other stakeholders to:
● Track progress and provide evidence on the performance and impact of a given project, program, or policy.
● Provide feedback based on evidence to improve ongoing and future implementation and impact of a project, program, or policy.
1. Operational definitions and terminologies in monitoring and evaluation
1. Accountability—responsibility for the use of resources and the decisions made, as well
as the obligation to demonstrate that work has been done in compliance with agreed-upon
rules and standards and to report fairly and accurately on performance results vis-a-vis
mandated roles and/or plans.
2. Activity—actions taken or work performed through which inputs such as funds,
technical assistance, and other types of resources are mobilized to produce specific
outputs.
3. Assumptions—hypotheses about factors or risks which could affect the progress or
success of an intervention. Intervention results depend on whether or not the assumptions
made prove to be correct.
4. Attribution—the ascription of a causal link between observed changes and a specific
intervention.
5. Audit—an independent, objective quality assurance activity designed to add value and
improve an organization’s operations. It helps an organization accomplish its objectives
by bringing a systematic, disciplined approach to assess and improve the effectiveness of
risk management, control and governance processes. Note: Internal auditing is conducted
by a unit reporting to management, while external auditing is conducted by an
independent organization.
6. Baseline—the status of services and outcome-related measures such as knowledge,
attitudes, norms, behaviors, and conditions before an intervention, against which progress
can be assessed or comparisons made.
7. Benchmark—a reference point or standard against which performance or achievements
can be assessed. Note: A benchmark refers to the performance that has been achieved in
the recent past by other comparable organizations, or what can be reasonably inferred to
have been achieved in similar circumstances.
8. Beneficiaries—the individuals, groups, or organizations, whether targeted or not, that
benefit directly or indirectly from the intervention.
9. Case study—a methodological approach that describes a situation, individual, or the like
and that typically incorporates data-gathering activities (e.g., interviews, observations,
questionnaires) at selected sites or programs/projects. Case studies are characterized by
purposive selection of sites or small samples; the expectation of generalizability is less
than that in many other forms of research. The findings are used to report to stakeholders,
make recommendations for program/project improvement, and share lessons learned.
10. Conclusions—point out the factors of success and failure of the evaluated intervention,
with special attention paid to the intended and unintended results, and more generally to
any other strength or weakness. A conclusion draws on data collection and analysis
undertaken through a transparent chain of arguments.
11. Coverage—the extent to which a program/intervention is being implemented in the right
places (geographic coverage) and is reaching its intended target population (individual
coverage).
12. Data—specific quantitative and qualitative information or facts that are collected and
analyzed.
13. Economic evaluation—uses applied analytical techniques to identify, measure, value and
compare the costs and outcomes of alternative interventions. Types of economic
evaluations include cost-benefit, cost-effectiveness, cost-efficiency evaluations.
14. Effectiveness—the extent to which a program/intervention has achieved its objectives
under normal conditions in a real-life setting.
15. Efficacy— the extent to which an intervention produces the expected results under ideal
conditions in a controlled environment.
16. Efficiency—a measure of how economically inputs (resources such as funds, expertise,
time) are converted into results.
17. Epidemiology—the study of the magnitude, distribution and determinants of health-
related conditions in specific populations, and the application of the results to control
health problems.
18. Evaluability—extent to which an intervention or program/intervention can be evaluated
in a reliable and credible fashion.
19. Evaluation—the rigorous, scientifically-based collection of information about
program/intervention activities, characteristics, and outcomes that determine the merit or
worth of the program/intervention. Evaluation studies provide credible information for
use in improving programs/interventions, identifying lessons learned, and informing
decisions about future resource allocation.
20. Facility survey—a survey of a representative sample of facilities that generally aims to
assess the readiness of all elements required to provide services and other aspects of
quality of care (e.g., basic infrastructure, drugs, equipment, test kits, client registers,
trained staff). The units of observation are facilities of various types and levels in the
same health system. The content of the survey may vary but typically includes a facility
inventory and, sometimes, health worker interviews, client exit interviews, and client-
provider observations.
21. Findings—Factual statements based on evidence from one or more evaluations.
22. Formative evaluation—a type of evaluation intended to improve the performance of a
program or intervention. A formative evaluation is usually undertaken during the design
and pre-testing of the intervention or program, but it can also be conducted early in the
implementation phase, particularly if implementation activities are not going as expected.
23. Generalizability—the extent to which findings can be assumed to be true for the entire
target population, not just the sample of the population under study. Note: To ensure
generalizability, the sampling procedure and the data collected need to meet certain
methodological standards.
24. Goal—a broad statement of a desired, usually longer-term, outcome of a program/intervention. Goals express general program/intervention intentions and help guide the development of a program/intervention. Each goal has a set of related, specific objectives that, if met, will collectively permit the achievement of the stated goal.
25. Health information system (HIS)—a data system, usually computerized, that routinely collects and reports information about the delivery and cost of health services, and patient demographics and health status.
26. Impact—the long-term, cumulative effect of programs/interventions over time on what
they ultimately aim to change, such as a change in HIV infection, AIDS-related
morbidity and mortality. Note: Impacts at a population-level are rarely attributable to a
single program/intervention, but a specific program/intervention may, together with other
programs/interventions, contribute to impacts on a population.
27. Impact evaluation—a type of evaluation that assesses the rise and fall of impacts, such
as disease prevalence and incidence, as a function of HIV programs/interventions.
Impacts on a population seldom can be attributed to a single program/intervention;
therefore, an evaluation of impacts on a population generally entails a rigorous design
that assesses the combined effects of a number of programs/interventions for at-risk
populations.
28. Impact monitoring—tracking of health-related events, such as the prevalence or
incidence of a particular disease; in the field of public health, impact monitoring is
usually referred to as “surveillance”.
29. Incidence—the number of new cases of a disease that occur in a specified population
during a specified time period.
30. Indicator—a quantitative or qualitative variable that provides a valid and reliable way to
measure achievement, assess performance, or reflect changes connected to an
intervention. Note: Single indicators are limited in their utility for understanding program
effects (i.e., what is working or is not working, and why?). Indicator data should be
collected and interpreted as part of a set of indicators. Indicator sets alone cannot
determine the effectiveness of a program or collection of programs; for this, good
evaluation designs are necessary.
31. Inputs—the financial, human, and material resources used in a program/intervention.
Input and output monitoring—tracking of information about program/intervention inputs (i.e., resources used in the program/intervention) and program/intervention outputs (i.e., results of the program/intervention activities). Note: Data on inputs and outputs usually exist in program/intervention documentation (e.g., activity reports, logs) and client records, which compile information about the time, place, type and amount of services delivered, and about the clients receiving the services.
32. Internal evaluation—an evaluation of an intervention conducted by a unit and/or
individuals who report to the management of the organization responsible for the
financial support, design and/or implementation of the intervention.
33. Intervention—a specific activity or set of activities intended to bring about change in
some aspect(s) of the status of the target population (e.g., HIV risk reduction, improving
the quality of service delivery).
34. Lessons learned—generalizations based on evaluation experiences with programs,
interventions or policies that abstract from the specific circumstances to broader
situations. Frequently, lessons highlight strengths or weaknesses in preparation, design,
and implementation that affect performance, outcome, and impact.
35. Logical framework—management tool used to improve the design of interventions. It
involves identifying strategic elements (inputs, outputs, activities, outcomes, impact) and
their causal relationships, indicators, and the assumptions of risks that may influence
success and failure. It thus facilitates planning, execution, and monitoring and evaluation
of an intervention.
36. Meta-evaluation—a type of evaluation designed to aggregate findings from a series of
evaluations. It can also be used to denote the evaluation of an evaluation to judge its
quality and/or assess the performance of the evaluators.
37. M&E plan—a multi-year implementation strategy for the collection, analysis and use of
data needed for program / project management and accountability purposes. The plan
describes the data needs linked to a specific program / project; the M&E activities that
need to be undertaken to satisfy the data needs and the specific data collection procedures
and tools.
38. M&E work plan—an annual costed M&E plan that describes the priority M&E
activities for the year and the roles and responsibilities of organizations / individuals for
their implementation; the cost of each activity and the funding identified; a timeline for
delivery of all products / outputs.
39. Objective—a statement of a desired program/intervention result that meets the criteria of
being Specific, Measurable, Achievable, Realistic, and Time-phased (SMART).
40. Outcome—short-term and medium-term effect of an intervention’s outputs, such as
change in knowledge, attitudes, beliefs, behaviors.
41. Outcome evaluation—a type of evaluation that determines if, and by how much,
intervention activities or services achieved their intended outcomes. An outcome
evaluation attempts to attribute observed changes to the intervention tested. Note: An
outcome evaluation is methodologically rigorous and generally requires a comparative
element in its design, such as a control or comparison group, although it is possible to use
statistical techniques in some instances when control/comparison groups are not available
(e.g., for the evaluation of a national program).
42. Performance—the degree to which an intervention or organization operates according
to specific criteria/standards/guidelines or achieves results in accordance with stated
goals or plans.
43. Population-based survey—a type of survey which is statistically representative of the
target population, such as the AIDS Indicator Survey (AIS), the Demographic and Health
Survey (DHS).
44. Prevalence—the total number of persons living with a specific disease or condition at a given time (a short worked example contrasting prevalence with incidence follows this list of terms).
45. Process evaluation—a type of evaluation that focuses on program/intervention
implementation, including, but not limited to, access to services, whether services reach
the intended population, how services are delivered, client satisfaction and perceptions
about needs and services, management practices.
46. Program—A program generally includes a set of interventions marshaled to attain
specific global, regional, country, or subnational objectives; involves multiple activities
that may cut across sectors, themes and/or geographic areas.
47. Project—an intervention designed to achieve specific objectives within specified
resources and implementation schedules, often within the framework of a broader
program.
48. Qualitative data—data collected using qualitative methods, such as interviews, focus
groups, observation, and key informant interviews. Qualitative data can provide an
understanding of social situations and interaction, as well as people’s values, perceptions,
motivations, and reactions. Qualitative data are generally expressed in narrative form,
pictures or objects (i.e., not numerically). Note: The aim of a qualitative study is to
provide a complete, detailed description.
49. Quality assurance—planned and systematic processes concerned with assessing and
improving the merit or worth of an intervention or its compliance with given standards.
Note: Examples of quality assurance activities include appraisal, results based
management reviews, evaluations.
50. Quantitative data—data collected using quantitative methods, such as surveys.
Quantitative data are measured on a numerical scale, can be analysed using statistical
methods, and can be displayed using tables, charts, histograms and graphs. Note: The aim
of a quantitative study is to classify features, count them, and construct statistical models
in an attempt to explain what is observed.
51. Relevance—the extent to which the objectives, outputs, or outcomes of an intervention
are consistent with beneficiaries’ requirements, organisations’ policies, country needs,
and/or global priorities.
52. Reliability—consistency or dependability of data collected through the repeated use of a
scientific instrument or a data collection procedure used under the same conditions.
53. Research—a study which intends to generate or contribute to generalizable knowledge to
improve public health practice, i.e., the study intends to generate new information that
has relevance beyond the population or program from which data are collected. Research
typically attempts to make statements about how the different variables under study, in
controlled circumstances, affect one another at a given point in time.
54. Results—the outputs, outcomes, or impacts (intended or unintended, positive and/or
negative) of an intervention.
55. Stakeholder—a person, group, or entity who has a direct or indirect role and interest in
the goals or objectives and implementation of a program/intervention and/or its
evaluation.
56. Summative evaluation—a type of evaluation conducted at the end of an intervention (or
a phase of that intervention) to determine the extent to which anticipated outcomes were
produced. It is designed to provide information about the merit or worth of the
intervention.
57. Surveillance—the ongoing, systematic collection, analysis, interpretation, and
dissemination of data regarding a health-related event for use in public health action to
reduce morbidity and mortality and to improve health. Surveillance data can help predict
future trends and target needed prevention and treatment programs.
58. Target—the objective a program/intervention is working towards, expressed as a
measurable value; the desired value for an indicator at a particular point in time. Target
group—specific group of people who are to benefit from the result of the intervention.
59. Terms of reference (TOR) (of an evaluation)—written document presenting the
purpose and scope of the evaluation, the methods to be used, the standards against which
performance is to be assessed or analyses to be conducted, the resources and time
allocated, and the reporting requirements.
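Two of the terms above are easy to confuse: incidence (term 29) counts new cases arising over a period, while prevalence (term 44) counts all existing cases at a point in time. The minimal sketch below works through the distinction with invented figures for a hypothetical district; the population size and case counts are assumptions for illustration only.

```python
# Worked example with hypothetical figures: incidence vs. prevalence.
# Incidence counts NEW cases arising during a specified period;
# prevalence counts ALL existing cases at a given point in time.

total_population = 50_000             # district population (hypothetical)
new_cases_this_year = 250             # new diagnoses during the year (hypothetical)
people_living_with_condition = 1_200  # all existing cases at mid-year (hypothetical)

# Incidence over the one-year period, expressed per 1,000 population
incidence_per_1000 = new_cases_this_year / total_population * 1_000         # = 5.0

# Point prevalence at mid-year, expressed as a percentage of the population
prevalence_percent = people_living_with_condition / total_population * 100  # = 2.4

print(f"Incidence: {incidence_per_1000:.1f} new cases per 1,000 population per year")
print(f"Prevalence: {prevalence_percent:.1f}% of the population at mid-year")
```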
Monitoring
1. Is a continuous assessment of project implementation in relation to targeted outputs
2. Is a routine/daily assessment of ongoing activities and progress
3. Is also a continuous process that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress, the achievement of objectives and progress in the use of allocated funds
Evaluation
Evaluation is the periodic assessment of program/project activities.
Evaluation is assessing, as systematically and objectively as possible, a completed project or programme (or a phase of an ongoing project or programme that has been completed). Evaluations appraise data and information that inform strategic decisions, thus improving the project or programme in the future.
Evaluation is a rigorous analysis of completed or ongoing activities that helps management draw conclusions about five main aspects of the intervention:
I. Relevance
II. Effectiveness
III. Efficiency
IV. Impact
V. Sustainability
2. Difference between Monitoring and Evaluation
● Monitoring is the systematic and routine collection of information about program/project activities, whereas evaluation is the periodic assessment of those activities.
● Monitoring is an ongoing process carried out to check whether activities are on track; evaluation is done on a periodic basis to measure success against the objectives.
● Monitoring is usually done by internal members of the team; evaluation is done mainly by external members.
● Monitoring compares current progress with planned progress; evaluation looks at the achievements of the program, including positive/negative and intended/unintended effects.
● Information obtained from monitoring is most useful to the implementation/management team; information obtained from evaluation is useful to all stakeholders.
● Monitoring results are used for informed actions and decisions; evaluation results are used for planning new programs and interventions.
● Monitoring answers the question “Are we doing things right?”; evaluation answers the question “Are we doing the right things?”
3. Importance of M&E
Monitoring and evaluation are essential to any project or program. Through this process,
organizations collect and analyze data, and determine if a project/program has fulfilled its goals.
Monitoring begins right away and extends through the duration of the project. Evaluation comes
after and assesses how well the program performed. Every organization should have an M&E
system in place. Here are ten reasons why:
1. M&E results in better transparency and accountability-Because organizations track,
analyze, and report on a project during the monitoring phase, there’s more transparency.
Information is freely circulated and available to stakeholders, which gives them more
input on the project. A good monitoring system ensures no one is left in the dark. This
transparency leads to better accountability. With information so available, organizations
need to keep everything above board. It’s also much harder to deceive stakeholders.
2. M&E helps organizations catch problems early-Projects never go perfectly according
to plan, but a well-designed M&E system helps the project stay on track and perform well. M&E
plans help define a project’s scope, establish interventions when things go wrong, and
give everyone an idea of how those interventions affect the rest of the project. This way,
when problems inevitably arise, a quick and effective solution can be implemented.
3. M&E helps ensure resources are used efficiently-Every project needs resources. How
much cash is on hand determines things like how many people work on a project, the
project’s scope, and what solutions are available if things get off course. The information
collected through monitoring reveals gaps or issues, which require resources to address.
Without M&E, it wouldn’t be clear what areas need to be a priority. Resources could
easily be wasted in one area that isn’t the source of the issue. Monitoring and evaluation
helps prevent that waste.
4. M&E helps organizations learn from their mistakes-Mistakes and failures are part of
every organization. M&E provides a detailed blueprint of everything that went right and
everything that went wrong during a project. Thorough M&E documents and templates
allow organizations to pinpoint specific failures, as opposed to just guessing what caused
problems. Often, organizations can learn more from their mistakes than from their
successes.
5. M&E improves decision-making-Data should drive decisions. M&E processes provide
the essential information needed to see the big picture. After a project wraps up, an
organization with good M&E can identify mistakes, successes, and things that can be
adapted and replicated for future projects. Decision-making is then influenced by what
was learned through past monitoring and evaluation.
6. M&E helps organizations stay organized-Developing a good M&E plan requires a lot
of organization. That process in itself is very helpful to an organization. It has to develop
methods to collect, distribute, and analyze information. Developing M&E plans also
requires organizations to decide on desired outcomes, how to measure success, and how
to adapt as the project goes on, so those outcomes become a reality. Good organizational
skills benefit every area of an organization.
7. M&E helps organizations replicate the best projects/programs-Organizations don’t
like to waste time on projects or programs that go nowhere or fail to meet certain
standards. The benefits of M&E that we’ve described above – such as catching problems
early, good resource management, and informed decisions – all result in information that
ensures organizations replicate what’s working and let go of what’s not.
8. M&E encourages innovation-Monitoring and evaluation can help fuel innovative
thinking and methods for data collection. While some fields require specific methods,
others are open to more unique ideas. As an example, fields that have traditionally relied
on standardized tools like questionnaires, focus groups, interviews, and so on can branch
out to video and photo documentation, storytelling, and even fine arts. Innovative tools
provide new perspectives on data and new ways to measure success.
9. M&E encourages diversity of thought and opinions-With monitoring and evaluation,
the more information the better. Every team member offers an important perspective on
how a project or program is doing. Encouraging diversity of thought and exploring new
ways of obtaining feedback enhance the benefits of M&E. M&E tools like surveys are only
truly useful if they include a wide range of people and responses. In good
monitoring and evaluation plans, all voices are important.
10. Every organization benefits from M&E-While certain organizations can use more
unique M&E tools, all organizations need some kind of monitoring and evaluation
system. Whether it’s a small business, corporation, or government agency, all
organizations need a way to monitor their projects and determine if they’re successful.
Without strong M&E, organizations aren’t sustainable, they’re more vulnerable to failure,
and they can lose the trust of stakeholders.
4. Key concepts of M&E
1. Indicators: Indicators are specific, measurable, and observable variables used to assess progress or achievement. They provide evidence of the extent to which objectives are being met. Example: Number of participants, percentage increase, level of satisfaction.
2. Objectives: Objectives are clear and specific statements that describe the intended outcomes of a program or project. They articulate what the intervention aims to achieve. Example: Increase literacy rates by 20% within the target population.
3. Goals: Goals are broad, overarching statements that represent the overall purpose or desired long-term outcome of a program or organization. Example: Improve the quality of life for marginalized communities.
4. Inputs: Inputs refer to the resources, both human and material, invested in a program or project to facilitate the implementation of activities. Example: Funding, personnel, equipment, time.
5. Process: Process refers to the activities, tasks, or steps undertaken to implement a program or project. It focuses on how inputs are transformed into outputs. Example: Training sessions, workshops, community outreach.
6. Output: Outputs are the direct and tangible results or products of a program or project. They represent the immediate outcomes of activities. Example: Number of trained individuals, manuals produced.
7. Outcome: Outcomes are the intended or unintended effects or changes that result from the outputs. They reflect changes in behavior, knowledge, attitudes, or conditions. Example: Increased community awareness, improved health practices.
8. Impact: Impact represents the broader and long-term effects or changes that can be attributed to the overall intervention. It goes beyond immediate outcomes to assess the broader societal or systemic influence. Example: Reduced poverty rates, enhanced quality of life.
9. Efficiency: Efficiency refers to the relationship between the inputs invested in a program or project and the outputs or outcomes achieved. It assesses how well resources are utilized to produce results. Example: Cost per participant trained, time spent per activity. A short numeric sketch illustrating these calculations follows this list.
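A minimal numeric sketch of the concepts above, using invented figures for a hypothetical literacy-training program: it computes a percentage increase (an outcome indicator) and cost per participant trained (an efficiency indicator). All names and values are assumptions for illustration, not data from any real program.

```python
# Hypothetical training program: computing two common indicator values.

# Input (resources invested) -- figures are invented for illustration
budget_spent = 30_000.0          # total funding used, in currency units

# Output (direct result of activities)
participants_trained = 400       # "number of trained individuals"

# Outcome measure at baseline and at follow-up
literacy_rate_baseline = 0.55    # 55% literate at baseline
literacy_rate_followup = 0.66    # 66% literate at follow-up survey

# Outcome indicator: percentage-point and relative increase in literacy
point_increase = (literacy_rate_followup - literacy_rate_baseline) * 100
relative_increase = (literacy_rate_followup - literacy_rate_baseline) / literacy_rate_baseline * 100

# Efficiency indicator: cost per participant trained
cost_per_participant = budget_spent / participants_trained

print(f"Literacy rose by {point_increase:.0f} percentage points "
      f"({relative_increase:.0f}% relative increase)")              # 11 points, 20%
print(f"Cost per participant trained: {cost_per_participant:.2f}")  # 75.00
```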
5. Types of Monitoring
A project/programme usually monitors a variety of things according to its specific informational needs. These monitoring types often occur simultaneously as part of an overall project/programme monitoring system. Common types of monitoring include:
● Results monitoring: Tracks effects and impacts to determine if the project/programme is on target towards its intended results (inputs, activity, outputs, outcomes, impact, assumptions/risks monitoring) and whether there may be any unintended impact (positive or negative).
● Process (activity) monitoring: Tracks the use of inputs and resources, the progress of activities, how activities are delivered – the efficiency in time and resources – and the delivery of outputs.
● Compliance monitoring: Ensures compliance with, say, donor regulations and
expected results, grant and contract requirements, local governmental regulations and
laws, and ethical standards.
● Context (situation) monitoring: Tracks the setting in which the project/programme
operates, especially as it affects identified risks and assumptions, and any unexpected
considerations that may arise, including the larger political, institutional, funding, and
policy context that affect the project/programme.
● Beneficiary monitoring: Tracks beneficiary perceptions of a project/programme. It
includes beneficiary satisfaction or complaints with the project/programme, including
their participation, treatment, access to resources and their overall experience of change.
● Financial monitoring: Accounts for costs by input and activity within predefined categories of expenditure, to ensure implementation proceeds according to the budget and time frame (a simple budget-tracking sketch follows this list).
● Organizational monitoring: Tracks the sustainability, institutional development and
capacity building in the project/programme and with its partners.
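As a simple illustration of how financial monitoring compares spending against a plan, the sketch below checks actual expenditure against a predefined budget by category and flags overspending. The categories and amounts are hypothetical and stand in for whatever expenditure categories a real project defines.

```python
# Hypothetical budget-versus-actual check, the kind of comparison financial
# monitoring performs within predefined categories of expenditure.

planned_budget = {"training": 12_000, "materials": 5_000, "transport": 3_000}
actual_spend   = {"training": 13_500, "materials": 4_200, "transport": 2_900}

for category, planned in planned_budget.items():
    actual = actual_spend.get(category, 0)
    variance = actual - planned              # positive means overspend
    utilisation = actual / planned * 100     # share of the budget line used
    status = "OVER BUDGET" if variance > 0 else "within budget"
    print(f"{category:<10} planned {planned:>6}  actual {actual:>6}  "
          f"({utilisation:.0f}% used, {status})")
```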
6. Types of Evaluation
In Monitoring and Evaluation (M&E), there are several types of evaluation, each serving specific purposes and providing insights into different aspects of a program or project. The major types of evaluation include:
● Formative Evaluation: Conducted during the planning and implementation phases to provide feedback for improving program design and effectiveness. Emphasizes understanding the processes and identifying areas for improvement.
● Summative Evaluation: Conducted at the end of a program or project to assess overall effectiveness, outcomes, and impact. Provides an overall judgment on whether the objectives were achieved and the extent of success.
● Process (or Implementation) Evaluation: Focuses on the implementation of program activities to assess fidelity, quality, and adherence to the planned processes. Examines how well the program is being carried out, identifies challenges, and suggests improvements.
● Impact Evaluation: Assesses the broader and often longer-term effects of a program or project, attributing changes to the intervention. Explores the causal relationship between the intervention and observed outcomes.
● Outcome Evaluation: Examines the immediate and intermediate results or outcomes of a program to determine its effectiveness. Measures the extent to which the program's objectives have been achieved.
● Process Summative Evaluation: Combines elements of both process and summative evaluations to assess both implementation processes and overall program effectiveness. Examines the quality of program implementation and its impact on outcomes.
● Developmental (or Utilization-Focused) Evaluation: Conducted with a focus on supporting ongoing program development and adaptation, emphasizing real-time feedback. Emphasizes learning, adaptation, and the utilization of findings for decision-making.
● Cost-Benefit and Cost-Effectiveness Evaluation: Assesses the economic efficiency of a program by comparing costs to benefits or outcomes achieved. Explores the economic aspects of the program, helping to prioritize resource allocation (a simple cost-effectiveness calculation is sketched at the end of this section).
● Ex Post Facto Evaluation: Conducted retrospectively after a program has been completed to assess its impact and outcomes. Examines the long-term effects and sustainability of the program.
● Meta-Evaluation: Evaluates the quality and relevance of existing evaluations to determine their validity and reliability. Assesses the methods, findings, and overall utility of previous evaluations.
● Joint Evaluation: Involves collaboration between multiple stakeholders or organizations in the evaluation process. Enhances objectivity, credibility, and inclusivity in the evaluation process.
The choice of evaluation type depends on the specific needs, goals, and context of the program
or project being evaluated. Different types of evaluation can be used at various stages of the
project life cycle to ensure a comprehensive understanding of its effectiveness and impact.
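To make the cost-effectiveness idea above concrete, the short sketch below compares two hypothetical interventions by cost per unit of outcome achieved; the program names, costs and outcome counts are all invented for illustration, and a real economic evaluation would also consider costs and benefits more broadly.

```python
# Hypothetical cost-effectiveness comparison: cost per unit of outcome achieved.
# The intervention with the lower cost per outcome is the more cost-effective
# use of resources for that particular outcome, under these assumed figures.

interventions = {
    # name: (total cost, outcomes achieved, e.g. children reaching reading proficiency)
    "School-based tutoring":   (40_000, 800),
    "Community radio lessons": (25_000, 400),
}

for name, (cost, outcomes) in interventions.items():
    cost_per_outcome = cost / outcomes
    print(f"{name}: {cost_per_outcome:.2f} per outcome achieved")

# Tutoring works out at 50.00 per outcome and radio lessons at 62.50,
# so tutoring is the more cost-effective option under these assumptions.
```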
7. Best Practices in Monitoring
• Data collected should be of high quality to inform monitoring
• Monitoring data should be well-focused to specific audiences and uses
(necessary and sufficient).
• Monitoring should be systematic, based upon predetermined indicators
and assumptions.
• Monitoring should also look for unanticipated changes and the
information used to adjust implementation plans.
• Monitoring needs to be timely, so information can be readily used to
inform project/programme implementation.
• Whenever possible, monitoring should be participatory, involving key stakeholders.
• Monitoring information should be shared with beneficiaries, donors and any other relevant stakeholders.
8. Challenges and risks in M&E Practices
● Limited Financial and Human Resources: NGOs and organizations, particularly smaller
ones, frequently grapple with constrained financial and human resources. The
establishment of comprehensive Monitoring and Evaluation (M&E) systems necessitates
well-trained personnel, access to technology, and adequate funding. The absence of these
resources can impede the processes of data collection, analysis, and reporting.
● Insufficient Data Quality and Reliability: The task of gathering high-quality data can be
challenging for NGOs and organizations. Inadequate data quality, inaccuracies, and
inconsistencies pose a risk to the credibility of M&E findings, potentially undermining
the effectiveness of decision-making.
● Lack of Clear Indicators and Targets: In the absence of well-defined indicators and
targets, NGOs may encounter difficulties in measuring progress effectively. Unclear
objectives can lead to vague M&E efforts, making it challenging to assess the success of
programs.
● Time Constraints for M&E Activities: NGOs frequently engage in multiple projects
concurrently, leaving limited time for M&E activities. This time constraint may result in
hurried data collection and superficial analysis, diminishing the overall efficacy of M&E
efforts.