A program’s infrastructure is often cited as critical to public health success. The Component Model of Infrastructure (CMI) identifies evaluation as essential under the core component of engaged data. An evaluation plan is a written document that describes how to monitor and evaluate a program, as well as how to use evaluation results for program improvement and decision making. The evaluation plan clarifies how to describe what the program did, how it worked, and why outcomes matter. We use the Centers for Disease Control and Prevention’s (CDC) “Framework for Program Evaluation in Public Health” as a guide for developing an evaluation plan. Just as using a roadmap facilitates progress on a long journey, a well-written evaluation plan can clarify the direction your evaluation takes and facilitate achievement of the evaluation’s objectives.
A program’s infrastructure is often cited as a critical component of public health success.
The Office on Smoking and Health at the Centers for Disease Control and Prevention (OSH/CDC) has a long history of supporting evaluation and evaluation capacity building as a central component of state tobacco control program infrastructure.
Evaluation is also central to the health education profession itself: Area 4 of the 7 areas of responsibility for Certified Health Education Specialists is conducting evaluation and research related to health education.
An evaluation plan is a written document that describes how you will monitor and evaluate your program, as well as how you intend to use evaluation results for program improvement and decision making. The evaluation plan clarifies how you will describe the “what,” the “how,” and the “why it matters” for your program.
The “what” describes your program and how its activities are linked to its intended effects. It serves to clarify the program’s purpose and anticipated outcomes.
The “how” addresses the process for implementing a program and provides information about whether the program is operating with fidelity to the program’s design.
The “why it matters” provides the rationale for your program and its intended impact on public health. This is also sometimes referred to as the “so what?” question. Being able to demonstrate that your program has made a difference is critical to program sustainability.
An evaluation plan is similar to a roadmap. It clarifies the steps needed to assess the processes and outcomes of a program. An effective evaluation plan is more than a list of indicators in your program’s work plan. It is a dynamic tool that should be updated on an ongoing basis to reflect program changes and priorities over time.
Just as using a roadmap facilitates progress on a long journey, an evaluation plan can clarify the direction of your evaluation based on the program's priorities and resources and the time and skills needed to accomplish the evaluation. The process of developing a written evaluation plan in cooperation with an evaluation stakeholder workgroup (ESW) will foster collaboration; give stakeholders a sense of shared purpose; create transparency throughout the implementation process; and ensure that stakeholders have a common vision and understanding of the purpose, use, and users of the evaluation results. The use of evaluation results must be planned, directed, and intentional and should be included as part of the evaluation plan.
There are numerous ways in which you can frame your evaluation plan. We use the CDC’s “Framework for Program Evaluation in Public Health” as a guide for the planning process and outlining considerations for what to include in the written evaluation plan.
A primary feature of an evaluation plan is the identification and acknowledgement of the roles and responsibilities of an ESW. The ESW includes members who have a stake or vested interest in the evaluation findings and those who are the intended users of the evaluation.
For the ESW to be truly integrated into the development of the evaluation plan, it should ideally be identified in the plan itself. The form this takes may vary based on program needs. If it is politically important, a program might want to name each member of the workgroup, their affiliation, and their specific role(s) in the workgroup. Being transparent about the role and purpose of the ESW can facilitate buy-in for the evaluation plan. In addition, you may want to include the preferred method of communication and the timing of that communication for each stakeholder or group. A stakeholder chart or table can be a useful tool to include in your evaluation plan.
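For programs that maintain this information electronically, the stakeholder chart can also be kept as simple structured data. The following Python sketch is a minimal, hypothetical illustration; the fields and the entries are assumptions for the example, not elements prescribed by the CDC framework.

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    """One row of a hypothetical stakeholder chart."""
    name: str            # member or organization name
    affiliation: str     # agency or group represented
    role: str            # role in the evaluation stakeholder workgroup
    contact_method: str  # preferred method of communication
    contact_timing: str  # how often to communicate

# Hypothetical entries for illustration only
esw = [
    Stakeholder("J. Smith", "State health department",
                "Data reviewer", "email", "monthly"),
    Stakeholder("Community coalition", "Local partners",
                "Interpretation of findings", "in-person briefing", "quarterly"),
]

for member in esw:
    print(f"{member.name} ({member.affiliation}): {member.role}")
```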
The next step in the evaluation plan is to describe the program. A program description clarifies the program's purpose, stage of development, activities, capacity to improve health, and implementation context. A shared understanding among health educators, program staff, evaluators, and the ESW of the program, and of what the evaluation can and cannot deliver, is essential to the implementation of evaluation activities and the use of evaluation results. A narrative description in the written plan helps ensure a full and complete shared understanding of the program and provides a ready reference for stakeholders. A logic model may be used to succinctly synthesize the main elements of a program. The program description is essential for focusing the evaluation design and selecting appropriate methods. Too often, groups jump to evaluation methods before understanding what the program is designed to achieve or what the evaluation should deliver. The description will be based on your program's objectives and context, but most descriptions include, at a minimum, the following elements (a brief data sketch follows the list):
A statement of need to identify the health issue addressed
Inputs or program resources needed to implement program activities
Program activities linked to program outcomes through theory or best practice program logic
Stage of development of the program to reflect program maturity
Environmental context within which the program is implemented
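To make the elements above concrete, the main components of a program description can be captured alongside a logic model in a simple structure. The Python sketch below is a hypothetical illustration assuming a generic tobacco control program; all entries are invented for the example.

```python
# A minimal, hypothetical program description and logic model as plain data.
logic_model = {
    "statement_of_need": "High adult smoking prevalence in the state",
    "inputs": ["funding", "trained staff", "coalition partners"],
    "activities": ["media campaign", "quitline promotion"],
    "outputs": ["ads aired", "quitline calls received"],
    "short_term_outcomes": ["increased quit attempts"],
    "long_term_outcomes": ["reduced smoking prevalence"],
    "stage_of_development": "implementation",
    "context": "statewide program with local variation in policy environment",
}

# Program activities should be traceable to intended outcomes through
# theory or best-practice program logic.
for activity in logic_model["activities"]:
    outcomes = ", ".join(logic_model["short_term_outcomes"])
    print(f"{activity} -> {outcomes}")
```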
Programs typically move through three developmental stages: planning, implementation, and maintenance. For policy or environmental initiatives, which programs and health educators often evaluate, the stages might look more like this (a brief tracking sketch follows the list):
Environment and asset assessment
Policy or environmental change development
Policy or environmental change developed but not yet approved
Policy or environmental change approved but not implemented
Policy or environmental change in effect for less than 1 year
Policy or environmental change in effect for 1 year or longer
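Programs that evaluate several policy or environmental initiatives at once may find it useful to record each initiative's current stage explicitly. The Python sketch below is a hypothetical illustration; the stage labels mirror the list above, while the initiatives themselves are invented.

```python
from enum import Enum

class PolicyStage(Enum):
    """Stages for policy or environmental initiatives, per the list above."""
    ASSESSMENT = "Environment and asset assessment"
    DEVELOPMENT = "Policy or environmental change development"
    DEVELOPED_NOT_APPROVED = "Developed but not yet approved"
    APPROVED_NOT_IMPLEMENTED = "Approved but not implemented"
    IN_EFFECT_UNDER_1_YEAR = "In effect for less than 1 year"
    IN_EFFECT_1_YEAR_PLUS = "In effect for 1 year or longer"

# Hypothetical initiatives and their current stages
initiatives = {
    "smoke-free parks ordinance": PolicyStage.APPROVED_NOT_IMPLEMENTED,
    "campus tobacco-free policy": PolicyStage.IN_EFFECT_UNDER_1_YEAR,
}

for name, stage in initiatives.items():
    print(f"{name}: {stage.value}")
```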
When it comes to evaluation, the stages of development are not always a "once-and-done" sequence of events. For example, once a program has progressed past the initial planning stage, it may still encounter occasions when environment and asset assessment is needed. Additionally, in a multiyear program, the evaluation plan should consider both the baseline information and the future data sets that will be needed so that evaluators are prepared for more distal impact and outcome evaluations.
In this part of the plan, drawing on the program description, you will articulate the purposes and uses of the evaluation. This will aid in narrowing the evaluation questions and focusing the evaluation on program improvement and decision making. The scope and depth of any program evaluation depend on program and stakeholder priorities and the feasibility of conducting the evaluation given the available resources. Program staff should work together with the ESW to determine the priority and feasibility of the evaluation's questions and identify the uses of evaluation results before designing the evaluation plan. In this step, you may begin to notice the iterative nature of developing the evaluation plan as you revisit aspects of step 1 and step 2 to inform decisions made in step 3.
Even with an established multiyear plan, step 3 should be revisited with your ESW annually (or more often if needed) to determine whether priorities and feasibility issues still hold for the planned evaluation activities. This highlights the dynamic nature of the evaluation plan. Ideally, your plan should be intentional and strategic by design and generally cover multiple years for planning purposes, but the plan is not set in stone. It should also be flexible and adaptive. It is flexible because resources and priorities change and adaptive because opportunities and programs change. For example, you may have a new funding opportunity and a short-term program added to your overall program. The written plan can document where you have been and where you are going with the evaluation as well as why changes were made to the plan.
Discussion of the budget and the resources (financial and human) that can be allocated to the evaluation should be included in your feasibility discussion.
Now that you have solidified the focus of your evaluation and identified the questions to be answered, you will need to select and document the methods that fit the evaluation questions you have chosen. Sometimes an evaluation is guided by a favorite method, and the evaluation is forced to fit that method; this can lead to incomplete or inaccurate answers to evaluation questions. Ideally, the evaluation questions inform the methods. If you follow the steps in this outline, you will collaboratively choose, with your ESW, evaluation questions that will provide information to be used for program improvement and decision making. The most appropriate methods to answer the evaluation questions should then be selected, and the process you used to select them should be described in your plan. Additionally, it is prudent, as part of articulating the methods, to identify a timeline and the roles and responsibilities of those overseeing the evaluation implementation, whether program staff or stakeholders.
To accomplish this step of choosing appropriate methods to answer your evaluation questions, you will need to:
keep in mind the purpose, logic model/program description, stage of development of the program, evaluation questions, and what the evaluation can and cannot deliver.
determine the method(s) needed to fit the question(s). There are a multitude of options including, but not limited to, qualitative, quantitative, mixed methods, multiple methods, naturalistic inquiry, experimental, and quasi-experimental.
think about what will constitute credible evidence for stakeholders or users.
identify sources of evidence (e.g., persons, documents, observations, administrative databases, surveillance systems) and appropriate methods for obtaining quality (i.e., reliable and valid) data.
identify roles and responsibilities along with timelines to ensure the project remains on time and on track.
remain flexible and adaptive and, as always, transparent.
One tool that is particularly useful in your evaluation plan is an evaluation plan methods grid (an example grid appears at the end of this article).
Justifying conclusions includes analyzing the information you collected, interpreting the results, and drawing conclusions from your data. This step is needed to turn the data collected into meaningful, useful, and accessible information. It is critical to think through this process and outline in the evaluation plan the procedures to be implemented and the necessary timeline. Programs often incorrectly assume that they no longer need the ESW integrally involved in decision making around formulating conclusions and instead look to the "experts" to complete the analyses and interpretation of the program's data. However, engaging the ESW in this step is critical to ensuring the meaningfulness, credibility, and acceptance of evaluation findings and conclusions. Actively meeting with stakeholders to discuss preliminary findings helps guide the interpretation phase. In fact, stakeholders often have novel insights or perspectives that evaluation staff may not, leading to more thoughtful conclusions.
Planning for analysis and interpretation is directly tied to the timetable begun in step 4. Errors or omissions in planning this step can create serious delays in producing the final evaluation report and may result in missed opportunities (e.g., having current data available for a legislative session) if the report has been timed to correspond with significant events (e.g., program or national conferences).
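When reporting is timed to fixed events, even a lightweight check of analysis milestones against the reporting deadline can surface delays early. The Python sketch below is a hypothetical illustration; the milestones and dates are assumptions for the example.

```python
from datetime import date

# Hypothetical analysis-and-reporting milestones from the step 4 timetable
milestones = {
    "data collection complete": date(2024, 3, 1),
    "analysis complete": date(2024, 4, 15),
    "stakeholder review of conclusions": date(2024, 5, 1),
    "final report drafted": date(2024, 5, 20),
}
report_deadline = date(2024, 6, 1)  # e.g., ahead of a legislative session

for task, due in milestones.items():
    status = "on track" if due <= report_deadline else "AT RISK"
    print(f"{task}: due {due} ({status})")
```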
Moreover, it is critical that your evaluation plan includes time for interpretation and review of the conclusions by stakeholders to increase transparency and validity of your process and conclusions. The emphasis here is on justifying conclusions, not just analyzing data. This is a step that deserves due diligence in the planning process. A note of caution: As part of a stakeholder-driven process, there is often pressure for data interpretation to reach beyond the evidence when conclusions are drawn. It is the responsibility of the evaluator and the ESW to ensure that conclusions are drawn directly from the evidence. This is a topic that should be discussed with the ESW in the planning stages along with reliability and validity issues and possible sources of biases. If possible and appropriate, triangulation of data should be considered and remedies to threats to the credibility of the data should be addressed as early as possible.
Another often overlooked step in the planning stage is step 6, which encompasses planning for use of evaluation results, sharing of lessons learned, communication, and dissemination of results.
Based on the uses for your evaluation, you will need to determine who should learn about the findings and how they should learn them. Typically, this is the stage at which the final report is published. The impact and value of the evaluation results will increase if the program and the ESW take personal responsibility for getting the results to the right people in a usable, targeted format.
An intentional communication and dissemination approach should be included in your evaluation plan. As previously stated, the planning stage is the time for the program and the ESW to begin to think about the best way to share the lessons you will learn from the evaluation. The communication–dissemination phase of the evaluation is a 2-way process designed to support use of the evaluation results for program improvement and decision making. In order to achieve this outcome, a program must translate evaluation results into practical applications and must systematically distribute the information through a variety of audience-specific strategies.
The first step in writing an effective communications plan is to define your communication goals and objectives. Given that the communication objectives will be tailored to each priority audience, you need to consider with your ESW who the primary audiences are (e.g., the ESW, the funding agency, or the general public).
Once the goals, objectives, and priority audiences of the communication plan are established, you should consider the best ways to reach the intended audiences by considering which communication–dissemination methods or formats will best serve your goals and objectives. Will the program use newsletters/fact sheets, oral presentations, visual displays, videos, storytelling, and/or press releases? Carefully consider the best tools to use by getting feedback from your ESW, by learning from others' experiences, and by reaching out to priority audiences to gather their preferences. An excellent resource to facilitate creative techniques for reporting evaluation results is Torres et al.'s Evaluation Strategies for Communicating and Reporting.
Complete the communication planning step by establishing a timetable for sharing evaluation findings and lessons learned. The communication and dissemination chart provided at the end of this article can help organize audiences, objectives, tools, and timing.
It is important to note that you do not have to wait until the final evaluation report is written in order to share your evaluation results. A system for sharing interim results to facilitate program course corrections and decision making should be included in your evaluation plan.
Communicating results is not enough to ensure the use of evaluation results and lessons learned. The evaluation team and program staff need to proactively take action to encourage the use and widespread dissemination of the information gleaned through the evaluation project. It is helpful to strategize with stakeholders early in the evaluation process about how your program will ensure that findings are used to support program improvement efforts and informed decision making. Program staff and the ESW must take personal responsibility for ensuring the dissemination of and application of evaluation results.
The impact of the evaluation results can reach far beyond the evaluation report. If stakeholders are involved throughout the process, communication and participation may be enhanced. If an effective feedback loop is in place, program improvement and outcomes may be enhanced. If a strong commitment to sharing lessons learned and success stories is in place, then other programs may benefit from the information gleaned through the evaluation process. Changes in thinking, understanding, programs, and organizations may stem from thoughtful evaluative processes.
Thank you to the authors of the workbook: S Rene Lavinghouze, Jan Jernigan, LaTisha Marshall, Adriane Niare, Kim Snyder, Marti Engstrom, Rosanne Farris.
The findings and conclusions in this presentation are those of the author and do not necessarily represent the views of the Centers for Disease Control and Prevention.
Figure: The CDC framework for program evaluation.
Example Evaluation Plan Methods Grid
| Evaluation Question | Indicator/Performance Measure | Method | Data Source | Frequency | Responsibility |
|---|---|---|---|---|---|
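If the methods grid is maintained electronically, the same columns can be kept as structured records and exported for the ESW. The Python sketch below is a hypothetical illustration; the question, indicator, and assignments are invented for the example.

```python
import csv
import io

FIELDS = ["evaluation_question", "indicator", "method",
          "data_source", "frequency", "responsibility"]

# One hypothetical row of the methods grid
grid = [{
    "evaluation_question": "Did quitline call volume increase after the campaign?",
    "indicator": "Monthly quitline call counts",
    "method": "Quantitative trend analysis",
    "data_source": "Quitline administrative database",
    "frequency": "Monthly",
    "responsibility": "Program evaluator",
}]

# Export the grid as CSV so it can be shared and updated with the ESW
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(grid)
print(buffer.getvalue())
```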
Example Communication and Dissemination Chart
| Target Audience (Priority) | Objectives for the Communication | Tools | Timetable |
|---|---|---|---|
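As with the methods grid, the communication and dissemination chart can be kept as structured records. The Python sketch below is a hypothetical illustration; the audiences, objectives, tools, and timetables are assumptions for the example.

```python
# Hypothetical communication and dissemination chart entries
comm_plan = [
    {"audience": "Funding agency (high priority)",
     "objective": "Report progress toward funded objectives",
     "tools": ["final report", "oral presentation"],
     "timetable": "end of project year"},
    {"audience": "General public",
     "objective": "Share program successes and lessons learned",
     "tools": ["fact sheet", "press release"],
     "timetable": "within 3 months of the final report"},
]

for entry in comm_plan:
    tools = ", ".join(entry["tools"])
    print(f"{entry['audience']}: {entry['objective']} via {tools} ({entry['timetable']})")
```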