Module 12
Concept of Monitoring and Evaluation
By: Lialyn O. Abrenica, Derilyn A. Badino, Renan P. Octavio, Jemima D. Embudo
I. Overview
The Global Consultation on Agricultural Extension observed
that monitoring and evaluation are important yet frequently neglected
functions in most organizations. In the worldwide survey of national
extension systems, it was found that only about one-half of all
national extension systems have some type of monitoring and
evaluation (M & E) capacity. This module introduces you to the
concept of monitoring and evaluation.
II. Intended Learning Outcomes
At the end of the session, the students should be able to:
1. Identify the purpose of monitoring and evaluation;
2. Reflect on the different types of monitoring and evaluation studies; and
3. Determine the complementary roles of monitoring and evaluation.
III. Take off
Look at the image and reflect on why we need monitoring and evaluation in extension programs.
Watch the video on Monitoring and Evaluation of Agricultural Projects through this link: https://www.youtube.com/watch?v=H8gdxfbsu0M
IV. Learning Focus
What is Monitoring?
- The continuous, ongoing collection and review of information on program implementation, coverage, and use, for comparison with implementation plans.
- A continuous process of collecting and analyzing information to compare how well a project, program, or policy is being implemented against expected results.
- Supervising activities in progress to ensure they are on course and on schedule in meeting objectives and performance targets.
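As a simple illustration of comparing implementation against expected results, the sketch below (not from the module; the indicator names and figures are hypothetical) checks actual indicator values against planned targets and flags any that fall behind schedule.

```python
# A minimal sketch of routine progress monitoring: compare actual indicator
# values against planned targets and flag indicators falling behind.
# Indicator names and figures are hypothetical, for illustration only.
targets = {"farmers_trained": 200, "demo_plots_established": 10}
actuals = {"farmers_trained": 150, "demo_plots_established": 9}

for indicator, target in targets.items():
    actual = actuals.get(indicator, 0)
    pct = 100 * actual / target
    status = "on schedule" if pct >= 90 else "needs remedial action"
    print(f"{indicator}: {actual}/{target} ({pct:.0f}% of target, {status})")
```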
What is Evaluation?
- A systematic process to determine the extent to which service needs and results have been or are being achieved, and to analyze the reasons for any discrepancy.
- The systematic collection of information about the activities, characteristics, and outcomes of projects to make judgments about the project, improve effectiveness, and/or inform decisions about future programming.
Evolution of Monitoring and Evaluation as a Discipline
- Program evaluation evolved primarily in the USA and became a semi-
professional discipline around 1960.
- At the end of the 19th century, Joseph Rice conducted the first generally recognized formal education evaluation in the USA.
- During the early part of the 20th century, a focus on systematization, standardization, and efficiency emerged in the field of education program evaluation.
- Large-scale programs with an evaluation component became commonplace in the US, and the trend also took hold in Europe and Australia.
- Responding to the need for more sophisticated evaluation models, authors published many new books and papers by the 1970s (Cronbach 1963; Scriven 1967; Stake 1967).
- 1980s-1990s: the era of accountability. In the US, a new direction for evaluation emerged in the 1980s with the presidency of Ronald Reagan, who led a backlash against government programming.
Table 1. Differences between monitoring and evaluation

Definition
  Monitoring: Ongoing analysis of project progress towards achieving planned results, with the purpose of improving management decision-making.
  Evaluation: Assessment of the efficiency, impact, relevance, and sustainability of the project's actions.

Who is responsible
  Monitoring: Internal management.
  Evaluation: Usually incorporates external agencies.

When
  Monitoring: Ongoing.
  Evaluation: Usually at completion, but also at mid-term, ex-post, and ongoing.

Why
  Monitoring: Check progress, take remedial action, and update plans.
  Evaluation: Learn broad lessons applicable to other programs and projects; provide accountability.

Source: EC, 2004, Project Cycle Management Guidelines
Table 2. Summary of differences between monitoring and evaluation

Timing
  Monitoring: Continuous throughout the project.
  Evaluation: Periodic review at significant points in project progress: end of project, midpoint of project, change of phase.

Scope
  Monitoring: Day-to-day activities, outputs, indicators of progress and change.
  Evaluation: Overall delivery of outputs and progress towards objectives and goal.

Main participants
  Monitoring: Project staff, project users.
  Evaluation: External evaluators/facilitators, project users, project staff, donors.

Process
  Monitoring: Regular meetings, interviews, monthly and quarterly reviews, etc.
  Evaluation: Extraordinary meetings, additional data collection exercises, etc.

Written outputs
  Monitoring: Regular reports and updates to project users, management, and donors.
  Evaluation: Written report with recommendations for changes to the project, presented in a workshop to different stakeholders.
Figure 1. Monitoring and Evaluation in the Program and Project Cycle
Purpose of Monitoring and Evaluation
1. Provide constant feedback on the extent to which the projects are achieving
their goals.
2. Identify potential problems at an early stage and propose possible solutions.
3. Monitor the accessibility of the project to all sectors of the target population.
4. Monitor the efficiency with which the different components of the project are
being implemented and suggest improvements.
5. Evaluate the extent to which the project is able to achieve its general
objectives.
6. Provide guidelines for the planning of future projects.
7. Influence sector assistance strategy. Relevant analysis from project and
policy evaluation can highlight the outcomes of previous interventions, and
the strengths and weaknesses of their implementation.
8. Improve project design. Use of project design tools such as the logframe (logical framework) results in the systematic selection of indicators for monitoring project performance. The process of selecting indicators for monitoring is a test of the soundness of project objectives and can lead to improvements in project design (a simple sketch of a logframe row follows this list).
9. Incorporate views of stakeholders. Awareness is growing that participation
by project beneficiaries in design and implementation brings greater
“ownership” of project objectives and encourages the sustainability of
project benefits. Ownership brings accountability.
10. Show need for mid-course corrections. A reliable flow of information during
implementation enables managers to keep track of progress and adjust
operations to take account of experience (OED).
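As a simple illustration of item 8, the sketch below represents one row of a logframe as a plain record; the LogframeRow name and all field values are hypothetical, not taken from the module.

```python
from dataclasses import dataclass

# A minimal sketch of one row of a logframe (logical framework).
# The class name and all field values are hypothetical, for illustration only.
@dataclass
class LogframeRow:
    level: str                  # goal, purpose, output, or activity
    narrative: str              # what the project intends to achieve
    indicator: str              # how achievement will be measured
    means_of_verification: str  # where the evidence will come from
    assumptions: str            # external conditions that must hold

row = LogframeRow(
    level="output",
    narrative="Farmers adopt improved rice varieties",
    indicator="Share of trained farmers planting improved seed within one season",
    means_of_verification="Field workers' reports; follow-up survey",
    assumptions="Certified seed remains available at local outlets",
)
print(row.indicator)
```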
Types of M & E Studies
A needs assessment: Focuses on identifying the needs of the target audience, developing a rationale for a program, identifying needed inputs, determining program content, and setting program goals. It answers the questions:
What do we need, and why?
What does our audience expect from us?
What resources do we need for program implementation?
A baseline study: Establishes a benchmark from which to judge future
program or project impact. It answers the questions:
What is the current status of the program?
What is the current level of knowledge, skills, attitudes, and beliefs of
our audience?
What are our priority areas of intervention?
What are our existing resources?
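As a simple illustration of judging impact against a baseline benchmark, the sketch below compares a baseline value with a later follow-up value; the adoption figures are hypothetical, not from the module.

```python
# A minimal sketch of judging change against a baseline benchmark.
# The adoption figures are hypothetical, for illustration only.
baseline_adoption = 0.20   # share of farmers using the practice at baseline
endline_adoption = 0.35    # share at a later follow-up

change = endline_adoption - baseline_adoption
print(f"Change from baseline: {change * 100:.0f} percentage points")
```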
A formative, process, or developmental evaluation: Provides information for program improvement, modification, and management. It answers the questions:
What are we supposed to be doing?
What are we doing?
How can we improve?
A summative, impact, or judgmental evaluation: Focuses on
determining the overall success, effectiveness, and accountability of the
program. It helps make major decisions about a program’s continuation,
expansion, reduction, and/or termination. It answers the questions:
What were the outcomes?
Who participated and how?
What were the costs?
A follow-up study: Examines long-term effects of a program. It answers
the questions:
What were the impacts of our program?
What was most useful to participants?
What are the long-term effects?
Table 3. The complementary roles of monitoring and evaluation

Monitoring: Routine collection of information.
Evaluation: Analyzing information.

Monitoring: Tracking project implementation progress.
Evaluation: Ex-post assessment of effectiveness and impact.

Monitoring: Measuring efficiency.
Evaluation: Confirming project expectations.

Monitoring asks: “Is the project doing things right?”
Evaluation asks: “Is the project doing the right things?”

Source: Alex and Byerlee (2000)
Methods of Gathering Data for Monitoring and Evaluation
1. Interviews. These can be structured, semi-structured, or unstructured (see Glossary of Terms). They involve asking specific questions aimed at getting information that will enable indicators to be measured. Questions can be open-ended or closed (yes/no answers). Interviews can be a source of both qualitative and quantitative information.
2. Key informant interviews. These are interviews that are carried out with
specialists in a topic or someone who may be able to shed particular light
on the process.
3. Questionnaires. These are written questions used to get written responses which, when analyzed, will enable indicators to be measured.
4. Focus Group Discussion. In a focus group, a group of about six to 12 people
is interviewed together by a skilled interviewer/facilitator with a carefully
structured interview schedule. Questions are usually focused on a specific
topic or issue.
5. Community Meetings. This involves a gathering of a fairly large group of
beneficiaries to whom questions, problems, and situations are put for input
to help in measuring indicators.
6. Field Workers' Reports. Structured report forms ensure that indicator-related questions are asked, and that answers and observations are recorded on every visit.
7. Ranking. This involves getting people to say what they think is most useful,
most important, least useful, etc.
8. Visual/Audio Stimuli. These include pictures, movies, tapes, stories, role-
plays, and photographs, used to illustrate problems or issues or past events,
or even future events.
9. Rating Scales. People are usually asked to say whether they agree strongly, agree, don’t know, disagree, or disagree strongly with a statement. You can use pictures and symbols in this technique if people cannot read and write. A simple tallying sketch follows this list.
10. Critical Event/Incident Analysis. This method is a way of focusing interviews
with individuals or groups on particular events or incidents. The purpose of
doing this is to get a very full picture of what actually happened.
11. Participant Observation. This involves direct observation of events,
processes, relationships, and behaviors. “Participant” here implies that the
observer gets involved in activities rather than maintaining a distance.
12. Self Drawing. This involves getting participants to draw pictures, usually of
how they feel or think about something.
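As a simple illustration of method 9 (rating scales), the sketch below tallies Likert-type responses into counts and percentages; the responses are hypothetical, not from the module.

```python
from collections import Counter

# A minimal sketch of tallying rating-scale (Likert-type) responses.
# The responses are hypothetical, for illustration only.
responses = ["agree", "agree strongly", "agree", "don't know",
             "disagree", "agree", "agree strongly"]

counts = Counter(responses)
total = len(responses)
for category in ["agree strongly", "agree", "don't know",
                 "disagree", "disagree strongly"]:
    n = counts.get(category, 0)
    print(f"{category}: {n} of {total} ({100 * n / total:.0f}%)")
```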
VII. References
AEXT392: Lecture 02: Extension Program Planning and Evaluation. Retrieved
from: http://eagri.org/eagri50/AEXT392/lec02.html
Sanchez, J. and Undong, M. 2016. Planning, Managing, Monitoring and
Evaluation. CED 240-WY. UP Los Baños.