3: Project Control
Project control is the system used to identify variations from the approved plan so that corrective actions can be identified and implemented in accordance with stakeholder preferences.
It encompasses the people, processes and tools used to plan, manage and mitigate cost and schedule issues and any risk events that may impact a project. Project control begins early in the project with planning, ends late in the project with post-implementation review, and is thoroughly involved in every step in between. Each project should be assessed for the appropriate level of control needed: too much control is overly time-consuming, while too little control is very risky.
Key elements of control
There are four key elements in a control system, as depicted in the figure below (Source: David I. Cleland, Project Management: Strategic Design and Implementation, 3rd ed. (New York, NY: McGraw-Hill, 1999), p. 325).
Step 1: Establishing Performance Standards
Performance standards are objectives set during the design process; they are the guidelines established as the basis for measurement. A standard is a precise, explicit statement of expected results from a product, service, machine, individual, or organizational unit. It is usually expressed numerically and is set for quality, quantity, and time. Sub-controls in this step include time controls, material controls, equipment controls, cost controls, budget controls, financial controls, and operations controls.
Step 2: Observing and Measuring Performance
Project supervisors collect data to measure actual performance and determine variation from the standard. Personal observation, statistical reports, or oral reports can be used to measure performance. For instance, observing employees at work provides hands-on information, extensive coverage, and the ability to read between the lines.
Step 3: Comparing Results
Results are compared with the standards to discover variations. Some variation can be expected in most activities, so an acceptable range of variation has to be established. Management usually lets operations continue as long as they are within the defined control limits. A deviation that exceeds this range alerts the manager to a problem and leads to the next step.
Step 4: Corrective Action
Taking corrective action means the supervisor finds the cause of the deviation and then acts to remove or minimize it. If the variation in performance stems from a deficient activity, the supervisor can take immediate corrective action and get performance back on track. Alternatively, the manager can opt for basic corrective action, which determines how and why performance has deviated and corrects the source of the deviation. Immediate corrective action is more efficient, while basic corrective action is more effective.
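To make the loop concrete, here is a minimal sketch in Python of the four steps applied to a single cost standard. The figures and function names are illustrative assumptions only, not part of Cleland's model:

```python
# Minimal sketch of the four-step control loop described above.
# All names and figures are illustrative assumptions.

def control_check(standard: float, actual: float, tolerance: float) -> str:
    """Compare actual performance to the standard (Step 3) and flag
    whether corrective action is needed (Step 4)."""
    variation = actual - standard
    if abs(variation) <= tolerance:
        return "within control limits - let operations continue"
    return f"deviation of {variation:+.2f} exceeds limits - take corrective action"

# Step 1: establish the performance standard (e.g. budgeted cost, in $000s)
budget_standard = 120.0
# Step 2: observe and measure actual performance (actual cost to date)
actual_cost = 131.5
# Steps 3-4: compare results and decide on action
print(control_check(budget_standard, actual_cost, tolerance=5.0))
```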
Three types of control processes
Preliminary control- a type of control that identifies major problems before they occur. It is pre-emptive and focuses on preventing deviation in planning, organizing, staffing and motivating by ensuring that every possible malfunction has been taken care of. The controller does not wait for a record of a hit or a miss to start controlling, but tries to eliminate everything that might go wrong. An example is scheduled maintenance of machinery and vehicles to prevent breakdowns which might affect production.
Concurrent control- This form of control endeavors to monitor the operation in progress. Under this type, work may not proceed to the next step unless it passes a screening test. Controls of this nature are essentially safety devices. For example, controls against serving spoiled food on airlines are so strict that controllers take extra precautions to make sure the quality of the food meets specifications.
Post-action control- In this type, control is carried out after the event. This is the poorest form of control because it exists only to improve the next attempt. Post-action control methods include analysis of budgets and financial statements.
Project Change
Change- in the context of a project, any modification to the boundaries that had previously been agreed on, such as benefit, scope, time or cost targets.
Reasons for project change
Changes may result from;
Change in project needs/requirements driven by project sponsor or other stakeholders
Change in business environment (e.g. economic or social factors, competitors' actions)
Problems/opportunities which occur during the course of project implementation
Modifications/enhancements identified by the project team
Faults detected by the project team/users
Configuration Management (sometimes called version control or change control)- Technical
and administrative control of the multiple versions or editions of a specific deliverable,
particularly where the component has been changed after it was initially completed. Most
typically this applies to objects, modules, data definitions and documentation.
Scope change- Where a request is considered to change the agreed scope and objectives of the
project to accommodate a need not originally defined to be part of the project.
Scope creep (also called requirement creep)- refers to uncontrolled changes or continuous
growth in a project's scope. This phenomenon can occur when the scope of a project is not
properly defined, documented, or controlled. It is generally considered a negative occurrence,
and therefore should be avoided. Typically, the scope increase consists of either new products or
new features of an already approved project, without corresponding increases in resources, time,
or budget. As a result, the project team risks drifting away from its original purpose and scope
into unplanned additions. As the scope of a project grows, more tasks must be completed within
the budget and schedule originally designed for a smaller set of tasks. Accordingly, scope creep
can result in a project team overrunning its original budget and schedule.
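As a purely hypothetical illustration of that squeeze, the short sketch below uses made-up figures to show how adding tasks without adding budget dilutes the funding available per task:

```python
# Hypothetical scope-creep arithmetic: the original budget is spread
# over more tasks than it was planned for. Figures are invented.
original_tasks, original_budget = 50, 200_000
added_tasks = 10  # creep: +20% scope with no extra budget

per_task_planned = original_budget / original_tasks
per_task_after_creep = original_budget / (original_tasks + added_tasks)
print(f"Planned funding per task:     ${per_task_planned:,.0f}")
print(f"Funding per task after creep: ${per_task_after_creep:,.0f}")
```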
Factors that could contribute to the occurrence of scope creep include;
a disingenuous customer/beneficiary with a determined "value for free" policy
poor change control
lack of proper initial identification of what is required to realize the project objectives
weak project manager or sponsor
poor communication between stakeholders
Controlling as a management function
Controlling is an important function in management. When done well, controlling:
Ensures that the overall directions of individuals and groups are consistent with the short- and long-term plans of the institution or organization.
Helps ensure that objectives and accomplishments are consistent with one another throughout the organization.
Helps maintain compliance with essential organizational rules and policies.
Change Control Process
This is a formal process that ensures all changes made to a project are brought about in a
controlled and coordinated way that reduces any disruption to ongoing project activity and
remains cost effective without placing a large requirement on generally scarce resources.
Refer to the handout on the change control process (Robert Buttrick, Project Management Workout, 3rd edn., 2005, p. 427; hand-out 5), and then discuss the following.
The six-step change control process is as follows;
1. Request for Change is produced
This can be on a standard template, via email, or in the form of a "Can you just…" request. A Request for Change (RFC) can be formal or informal. The point is that you must have a request submitted in some form.
2. Receive RFC
The RFC will normally go to the project manager. It could, however, be given to any member of the project team. Make sure that your team knows what to do with change requests when they are asked to incorporate a change. Changes could also go to the program manager if you are working on a program, as they may have implications for other projects.
Another option is that RFCs go directly to the Project Management Office. The PMO is likely to have to delegate the next step to you, as they won't have the detail, but they can act as management and administrative help in filtering the requests, especially if your project has a lot of changes.
3. Analyze RFC
You and the team will have to analyze the RFC. Consider the impact on the project scope,
budget, timescales and resources.
4. Make the decision
You may want to set limits and guidelines around decision making. For example, if the change can be delivered with no change to budget, the project manager could be authorized to make that decision alone. If the change requires additional budget or resources, the sponsor may have to make the final decision, based on the recommendation from the manager.
If you reject the change, tell the person who submitted the RFC. Explain why the change was
rejected. This step is often overlooked and it can be a cause of great resentment amongst the
stakeholder community. Taking the time to explain why you could not accommodate their
change will help them accept the decision. They may still choose to challenge it, or they may
resubmit a modified version of their RFC at a later date.
If the change request is rejected, the process stops here. If the change request is accepted and
authorized, move on to the next step.
5. Incorporate the change into the plans
Changes, by their nature, change things! So you will need to update your project plan and schedule (in whatever tool you use), your resource plan, your budget and anything else that is impacted as a result of this change. Update all your project paperwork as well.
6. Inform the requestor
Inform the person who raised the RFC that their change has been approved. This closes the loop
and gives them some feedback. You may also choose to tell them about any increase in resources
required to deliver the change, so that they have the whole picture about how much impact this
change has had.
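As an illustrative sketch only (not Buttrick's method, and with all class and field names invented for the example), the six steps can be modelled as a simple request record whose status moves through the workflow:

```python
# Toy model of the six-step change control process above.
# A real project would use its own change-management tool; Step 5
# (updating the plans) is represented here only by a status change.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    description: str
    requester: str
    status: str = "submitted"              # Steps 1-2: RFC produced and received
    impact_notes: list = field(default_factory=list)

    def analyze(self, note: str) -> None:  # Step 3: assess scope/budget/time impact
        self.impact_notes.append(note)
        self.status = "analyzed"

    def decide(self, approved: bool, reason: str) -> None:  # Step 4: make the decision
        self.status = "approved" if approved else "rejected"
        # Step 6: inform the requester of the outcome either way.
        print(f"To {self.requester}: change {self.status} - {reason}")

rfc = ChangeRequest("Add offline mode to mobile app", requester="j.doe")
rfc.analyze("Adds about 3 weeks to the schedule; needs one extra developer")
rfc.decide(approved=False, reason="no budget for extra resources this quarter")
```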
Sample Change Request Form
Change Request
Project:
Project code :
Date:
Change Requester
Change No.
Change Category (check all that apply):
Schedule
Testing/Quality
Cost
Scope
Requirement/Deliverables
Resources
Does this Change Affect (check all that apply):
Corrective Action
Preventive Action
Defect Repair
Updates
Other
Describe this Change Being Requested:
Describe the Reason for Change:
Describe any Technical Changes Required to Implement this Change:
Describe Risks to be Considered for this Change:
Estimate Resources and Costs Needed to Implement this Change:
Disposition:
Approve
Reject
Defer
Justification of Approval, Rejection, or Deferral:
Change Board Approval
Name:
Signature:
Date:
Sample Corrective Action Form
Name of person checking and affiliation:
Date checked:
Problem Identified:
Recommended Action:
Decision:
Person responsible for Correction:
Date of Correction:
Explanation:
4: Project Evaluation
Main purposes of evaluation;
Document program accomplishments
Guide decisions about budgets
Communicate with the public and with political decision makers
Identify strengths and weaknesses of a program, and then determine whether or not to
repeat or continue a program.
Demonstrate program effectiveness to funders
Justify current program funding and support the need for increased levels of funding
Satisfy ethical responsibility to clients to demonstrate positive and negative effects of
program participation
Document program development to help ensure successful replication
When should evaluation be conducted?
Evaluation data should be collected at several (and sometimes all) stages of a program or project.
These stages include;
The design stage. When information is collected before a project begins, it is called a needs
assessment. Knowing the needs of a target audience helps determine desired outcomes.
The start-up stage. Evaluation information gathered at the beginning of a program or project
helps establish a baseline to which changes can later be compared. This usually involves
conducting a pre-test or other ways of gathering information about existing conditions.
While the program or project is in progress. Collecting information during a program or project
helps managers determine if adjustments are needed.
After the program wraps up. A summative evaluation sums up what has occurred in the
project, asks for end-of-project reactions, and assesses success in meeting objectives. It is
typically used for accountability purposes.
Long after the program finishes. This stage of evaluation looks at the long-term benefits of a
program.
Before any project or program can be evaluated, it is critical that objectives and desired results of
the program or project be clear. Ideally, expected outcomes will be identified before the program
begins; however, in reality this is not always the case. At the very least, a program should have
clear objectives by the time an evaluation is conducted so evaluators know what standards to use
in judging a program, project, or policy.
Types of evaluations
There is a range of evaluation types, which can be categorized in a variety of ways.
Ultimately, the approach and method used in an evaluation are determined by the audience and purpose of the evaluation. The summary below groups key evaluation types according to three general categories. It is important to remember that the categories and types of evaluation are not mutually exclusive and are often used in combination. For instance, a final external evaluation is a type of summative evaluation and may use participatory approaches.
According to evaluation timing

Ex ante evaluation- This is the initial scoping study or position audit, which describes the project or programme's starting environment and provides the baseline information for future evaluation. Examples of ex ante evaluations include baseline evaluation and rapid assessment.

Process evaluations- These occur during project/programme implementation to improve performance and assess compliance. Evaluations of this kind are directed toward understanding and documenting program implementation. They answer questions about the types and quantities of services delivered, the beneficiaries of those services, the resources used to deliver the services, the practical problems encountered, and the ways such problems were resolved. Information from process evaluations is useful for understanding how program impact and outcomes were achieved and for program replication. Process evaluations are usually undertaken for projects that are innovative service delivery models, where the technology and the feasibility of implementation are not well known in advance.

Midterm evaluations- These are formative in purpose and occur midway through implementation. Typically, they do not need to be independent or external, but may be according to specific assessment needs.

Ex post evaluations- These are done just before or after the project is completed, and include not only the summative evaluation of the project itself (typically in terms of processes and outputs) but also an analysis of the project's impact on its environment and its contribution to wider (economic/societal/educational/community etc.) goals and policies. An ex post evaluation should also lay down a framework for future action, leading, in turn, to the next ex ante study. In reality, ex post evaluations often take so long to produce (in order to measure long-term impact) that they are too late to influence future planning. Examples of ex post evaluations include:

Goal-based/oriented evaluation- Any type of evaluation based on, and with knowledge of and reference to, the goals and objectives of the program, person, or product. The focus is on goals and objectives, measuring how the project/program has done in achieving them. This approach can be chosen if the major purpose of the evaluation is to measure planned outcomes.

Summative/final evaluations- These occur at the end of project/programme implementation to assess effectiveness and impact. They are conducted (often externally) at the completion of project/programme implementation to assess how well the project/programme achieved its intended objectives.

According to who conducts the evaluation

Internal or self-evaluations- These are conducted by those responsible for implementing a project/programme. They can be less expensive than external evaluations and help build staff capacity and ownership. However, they may lack credibility with certain stakeholders, such as donors, as they are perceived as more subjective (biased or one-sided). They tend to be focused on learning lessons rather than demonstrating accountability.

External or independent evaluations- These are conducted by evaluator(s) outside of the implementing team, which lends a degree of objectivity and often technical expertise. They tend to focus on accountability.

Participatory/joint evaluations- A collaborative process in which various stakeholders jointly evaluate the program and implement necessary revisions/adjustments.

According to evaluation technicality or methodology

Real-time evaluations (RTEs)- These are undertaken during project/programme implementation to provide immediate feedback for modifications to improve ongoing implementation. Emphasis is on immediate lesson learning over impact evaluation or accountability.

Meta-evaluations- These are used to assess the evaluation process itself. Some key uses of meta-evaluations include: taking inventory of evaluations to inform the selection of future evaluations; combining evaluation results; checking compliance with evaluation policy and good practices; and assessing how well evaluations are disseminated and utilized for organizational learning and change.
Essential elements of Evaluation/Evaluation Standards
Utility
Utility standards ensure that the information needs of evaluation users are satisfied. These standards address such items as identifying those who will be impacted by the evaluation, the amount and type of information collected, the values used in interpreting evaluation findings, and the clarity and timeliness of evaluation reports. The seven utility standards are:
1. Stakeholder Identification: People who are involved in (or will be affected by) the evaluation
should be identified, so that their needs can be addressed.
2. Evaluator Credibility: The people conducting the evaluation should be both trustworthy and
competent, so that the evaluation will be generally accepted as credible or believable.
3. Information Scope and Selection: Information collected should address pertinent questions
about the program, and it should be responsive to the needs and interests of clients and other
specified stakeholders.
4. Values Identification: The perspectives, procedures, and rationale used to interpret the
findings should be carefully described, so that the bases for judgments about merit and value are
clear.
5. Report Clarity: Evaluation reports should clearly describe the program being evaluated,
including its context, and the purposes, procedures, and findings of the evaluation. This will help
ensure that essential information is provided and easily understood.
6. Report Timeliness and Dissemination: Significant midcourse findings and evaluation reports
should be shared with intended users so that they can be used in a timely fashion.
7. Evaluation Impact: Evaluations should be planned, conducted, and reported in ways that
encourage follow-through by stakeholders, so that the evaluation will be used.
Feasibility
Feasibility standards ensure that the evaluation can be practically done. These standards
emphasize that the evaluation should employ practical, nondisruptive procedures; and that the
use of resources in conducting the evaluation should be prudent and produce valuable findings.
The feasibility standards are:
1. Practical Procedures: The evaluation procedures should be practical, to keep disruption of
everyday activities to a minimum while needed information is obtained.
2. Political Viability: The evaluation should be planned and conducted with anticipation of the
different positions or interests of various groups. This should help in obtaining their cooperation
so that possible attempts by these groups to curtail evaluation operations or to misuse the results
can be avoided or counteracted.
3. Cost Effectiveness: The evaluation should be efficient and produce enough valuable
information that the resources used can be justified.
Propriety
Propriety standards ensure that the evaluation is ethical (i.e. conducted with regard for the rights
and interests of those involved and affected). These standards address such items as developing
protocols and other agreements for guiding the evaluation; protecting the welfare of human
subjects; weighing and disclosing findings in a complete and balanced fashion; and addressing
any conflicts of interest in an open and fair manner. The eight propriety standards follow.
1. Service Orientation: Evaluations should be designed to help organizations effectively serve the needs
of all of the targeted participants.
2. Formal Agreements: The responsibilities in an evaluation (what is to be done, how, by whom, when)
should be agreed to in writing, so that those involved are obligated to follow all conditions of the
agreement, or to formally renegotiate it.
3. Rights of Human Subjects: Evaluation should be designed and conducted to respect and protect the
rights and welfare of human subjects, that is, all participants in the study.
4. Human Interactions: Evaluators should respect basic human dignity and worth when working with
other people in an evaluation, so that participants don't feel threatened or harmed.
5. Complete and Fair Assessment: The evaluation should be complete and fair in its examination,
recording both strengths and weaknesses of the program being evaluated. This allows strengths to be built
upon and problem areas addressed.
6. Disclosure of Findings: The people working on the evaluation should ensure that all of the evaluation
findings, along with the limitations of the evaluation, are accessible to everyone affected by the
evaluation, and any others with expressed legal rights to receive the results.
7. Conflict of Interest: Conflict of interest should be dealt with openly and honestly, so that it does not
compromise the evaluation processes and results.
8. Fiscal Responsibility: The evaluator's use of resources should reflect sound accountability procedures
and otherwise be prudent and ethically responsible, so that expenditures are accounted for and
appropriate.
Accuracy
Accuracy standards ensure that the evaluation produces findings that are considered correct. The
standards include such items as describing the program and its context; articulating in detail the
purpose and methods of the evaluation; employing systematic procedures to gather valid and
reliable information; applying appropriate qualitative or quantitative methods during analysis and
synthesis; and producing impartial reports containing conclusions that are justified. There are 12
accuracy standards:
1. Program Documentation: The program should be described and documented clearly and
accurately, so that what is being evaluated is clearly identified.
2. Context Analysis: The context in which the program exists should be thoroughly examined so
that likely influences on the program can be identified.
3. Described Purposes and Procedures: The purposes and procedures of the evaluation should
be monitored and described in enough detail that they can be identified and assessed.
4. Defensible Information Sources: The sources of information used in a program evaluation
should be described in enough detail that the adequacy of the information can be assessed.
5. Valid Information: The information gathering procedures should be chosen or developed and
then implemented in such a way that they will assure that the interpretation arrived at is valid.
6. Reliable Information: The information gathering procedures should be chosen or developed
and then implemented so that they will assure that the information obtained is sufficiently
reliable.
7. Systematic Information: The information from an evaluation should be systematically
reviewed and any errors found should be corrected.
8. Analysis of Quantitative Information: Quantitative information - data from observations or
surveys - in an evaluation should be appropriately and systematically analyzed so that evaluation
questions are effectively answered.
9. Analysis of Qualitative Information: Qualitative information - descriptive information from
interviews and other sources - in an evaluation should be appropriately and systematically
analyzed so that evaluation questions are effectively answered.
10. Justified Conclusions: The conclusions reached in an evaluation should be explicitly
justified, so that stakeholders can understand their worth.
11. Impartial Reporting: Reporting procedures should guard against the distortion caused by
personal feelings and biases of people involved in the evaluation, so that evaluation reports fairly
reflect the evaluation findings.
12. Meta-evaluation: The evaluation itself should be evaluated against these and other pertinent
standards, so that it is appropriately guided and, on completion, stakeholders can closely examine
its strengths and weaknesses.
What is the relationship between evaluation and audit?
Audits generally assess the soundness, adequacy and application of systems, procedures and
related internal controls. Audits encompass compliance of resource transactions, analysis of the operational efficiency and economy with which resources are used, and the analysis of the management of programmes and programme activities.
Like evaluation, audit assesses the effectiveness, efficiency and economy of both programme and
financial management and recommends improvement. However, the objective and focus of audit
differ from that of evaluation.
Unlike evaluation, audit does not establish the relevance or determine the likely impact or
sustainability of programme results. Audit verifies compliance with established rules,
regulations, procedures or mandates of the organization and assesses the adequacy of internal
controls. It also assesses the accuracy and fairness of financial transactions and reports.
Management audits assess the managerial aspects of a unit's operations.
Notwithstanding this difference in focus, audit and evaluation are both instruments through
which management can obtain a critical assessment of the operations of the organization as a
basis for instituting improvements.
Evaluation = Accountability + Learning
Audit = Accountability
Evaluation criteria
Relevance- The extent to which the objectives of the development intervention (e.g. project) are consistent with beneficiaries' requirements, country needs, global priorities, and partners' and donors' policies. It should include an assessment of the quality of project preparation and design, i.e. the logic and completeness of the project planning process, and the internal logic and coherence of the project design.
Efficiency- The extent to which the project results have been achieved at a reasonable cost, i.e. how well inputs/means have been converted into activities, in terms of quality, quantity and time, and the quality of the results achieved. This usually requires comparing alternative approaches to achieving the same results, to see whether the most efficient process was adopted.
Effectiveness- The extent to which the development intervention's objectives were achieved or are expected to be achieved, taking into account their relative importance.
Impact- The positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended. It also includes the effect of the project on its wider environment, and its contribution to wider policy and sector objectives.
Sustainability- An assessment of the likelihood that the benefits produced by the project will continue to flow after external funding has ended, with particular reference to factors of ownership by beneficiaries, policy support, economic and financial factors, socio-cultural aspects, appropriate technology, environmental aspects, and institutional and management capacity.
Steps for conducting a project/program evaluation
Step 1. Identify purpose. Clearly identify the reason for conducting the evaluation. Identify the
scope of the evaluation. Do you want to evaluate the whole program or just a part of it? Which
component? What do you want the evaluation to tell you? Ensure everyone on the team agrees.
Step 2. Review program goals. Closely examine the program or project goals as stated by the
designers of the program or project. What changes did the designers hope to make?
Step 3. Identify evaluation stakeholders. Stakeholders are those who have a stake in the outcome of the evaluation, not the audience(s) targeted by the program.
Step 4. Contact stakeholders. Obtain input about what questions they have on the program,
project, or policy. This can be accomplished through a workshop session where evaluation
questions are brainstormed, or by contacting stakeholders individually.
Step 5. Revisit the purpose of the evaluation. Based on your conversations with stakeholders and
your own reason for initiating the evaluation, rewrite the purpose of the evaluation.
Step 6. Decide if evaluation will be in-house or contracted out. Based on the scope of the
evaluation and the nature of the evaluation questions, decide whether you need to hire a
professional evaluator or if you can conduct the evaluation with existing staff. Develop a budget
based on your decision.
Step 7. Determine data-collection methods. Decide on data-collection procedures to answer your
evaluation questions.
Step 8. Create data-collection instrument. Construct or adapt existing data-collection
instrument(s).
Step 9. Test data-collection instrument. Administer the draft instrument with a group of willing
respondents and ask for their feedback on the instrument. Which questions were not clear? Were
any questions misleading?
Step 10. Collect evaluation data using the data collection methods discussed earlier.
Step 11. Summarize and analyze the data. Data can be analyzed using both qualitative and quantitative methods (e.g. SPSS); a minimal quantitative sketch follows this list.
Step 12. Prepare reports for stakeholders.
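For Step 11, the following is a minimal sketch of a quantitative summary using plain Python rather than SPSS. The questions and scores are invented purely for illustration:

```python
# Minimal sketch for Step 11: summarizing quantitative evaluation data.
# Assumes survey answers on a 1-5 scale; the data is made up.
from statistics import mean, stdev

responses = {
    "Q1: the training was relevant to my work": [4, 5, 3, 4, 5, 4],
    "Q2: I can apply what I learned":           [3, 4, 2, 4, 3, 3],
}

for question, scores in responses.items():
    print(f"{question}: n={len(scores)}, "
          f"mean={mean(scores):.2f}, sd={stdev(scores):.2f}")
```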
Sources of data for evaluation
Primary data sources include;
Project records (case records, registration records, academic records, and other
information).
Project management information systems.
Project reports and documents.
Project staff.
Project participants.
Members of a control or comparison group.
Staff of collaborating agencies.
Community leaders (in the literal sense of community and also community
of practice)
Outside experts.
The general public.
Secondary data sources
Secondary data is basically existing data that has already been collected by someone else. Secondary sources include;
Local, regional or national databases.
Previous surveys and studies.
Journal articles.
Published reports.
Records from other agencies.
Listing your likely sources in this way will help you to determine the sort and the amount of information that you'll need to collect.
The five key evaluation questions
The process of developing the answers to the evaluation questions will vary, as each project
varies, but the five fundamental questions remain the same.
5 key evaluation questions
What? 1. Did we do what we said we would do?
Why? 2. What did we learn about what worked and what didn't work, and why?
So what? 3. What difference did it make that we did this work?
Now what? 4. What could we do differently?
Then what? 5. How do we plan to use evaluation findings for continuous learning?
Role of control and monitoring in evaluation
If your evaluation involves a large team or the collection of primary data from a large number of sources (for example, interviewing or surveying all project participants or beneficiaries), then you may want to monitor the data collection process to ensure consistency. Nothing is more damaging to an evaluation effort than information collection instruments that have been incorrectly or inconsistently administered, or that are incomplete. There are various activities that can be undertaken as part of the monitoring process.
(i) Establish a routine and timeframe for submitting completed instruments.
It is a good idea to have instruments submitted to the appropriate member of the evaluation team
immediately after they have been filled in. That person can then review the instruments and
make sure that they are being filled in correctly. This will allow problems to be identified and
resolved immediately. You may need to retrain some members of the staff responsible for data
collection or have a group meeting to re-emphasize a particular procedure or activity.
(ii) Conduct random observations of the data collection process. A member of the evaluation
team may be assigned the responsibility of observing the data collection process at various times
during the evaluation. This person, for example, may sit in on an interview session to make sure
that all of the procedures are being correctly conducted.
(iii) Conduct random checks of respondents. As an additional quality control measure, someone
on the evaluation team may be assigned the responsibility of checking with a sample of
respondents on a routine basis to determine whether the instruments were administered in the
expected manner. This individual may ask respondents if they were given the informed consent
form to sign (if appropriate) and if it was explained to them, where they were interviewed,
whether their questions about the interview were answered, and whether they felt the attitude or
demeanor of the interviewer was appropriate.
(iv) Keep completed interview forms in a secure place. This will ensure that instruments are not lost and that confidentiality is maintained. Monitor your project's data collection instruments: they should not be left lying about, and access to this information should be limited. You may want to consider number-coding the forms rather than using names, though keeping a secure database that connects the names to codes is important.
(v) Encourage staff to view the evaluation as an important part of the project. If project staff are
given the responsibility for data collection, they will need support from you for this activity.
Their first priority is usually providing services or training to participants, and collecting evaluation information may not be valued. You will need to emphasize to your staff that the
evaluation is part of the project and that evaluation information can help them improve the
program implementation. Once evaluation information is collected, you can begin to analyze it.
To maximize the benefits of the evaluation to you, project staff, and project participants, this
process should take place on an ongoing basis or at specified intervals during the evaluation.
An outline for a project evaluation report
Executive summary
Usually not more than five pages (the shorter the better). It is intended to provide enough information for busy people, but also to tease people's appetite so that they want to read the full report.
Preface
Not essential, but a good place to thank people and make a broad comment about the process,
findings etc.
Contents page
With page numbers, to help people find their way around the report.
Section 1: Introduction
Usually deals with the background to the project/organization, the background to the evaluation, and the brief to the evaluation team.
Section 2: Methodology
Evaluation methodologies used in the actual evaluation process and any problems that occurred. Also captured in this section are the limitations of the evaluation.
Section 3: Findings
Here you would have sections dealing with the important areas of findings, e.g. efficiency,
effectiveness and impact, or the themes that have emerged. The findings should be presented
around the evaluation objectives
Section 4: Conclusions
Here you would draw conclusions from the findings: the interpretation, what they mean. It is quite useful to use a SWOT analysis.
Section 5: Recommendations
This would give specific ideas for a way forward in terms of addressing weaknesses and building
on strengths.
Appendices:
Here you would include Terms of Reference, list of people interviewed, questionnaires used,
possibly a map of the area and so on.