To examine the process of community-campus engagement in an initiative developed to build evaluation capacities of community-based organizations (CBOs).
Evaluability assessment, capacity-building, self-administered surveys, and semi-structured interviews were conducted from 2004 to 2007 and analyzed through transcript review and SPSS to identify trends, relationships, and capacity changes over time.
Evaluability assessment identified CBO strengths in program planning and implementation and challenges in measurable objective development, systematic use of mixed methods, data management and analysis. Evaluability assessment informed evaluation capacity-building (ECB) trainings, teleconferences and webinars that resulted in statistically significant improvements in evaluation knowledge, skills, and abilities. Post-initiative interviews indicated CBO preferences for face-to-face training in logic model development, mixed method data collection and analysis.
This report illustrates the use of mixed methods to plan, implement, and evaluate a model to catalyze CBOs' systematic assessment of prevention initiatives, and highlights considerations in evaluation capacity-building.
The HIV/AIDS epidemic continues to adversely affect communities throughout the United States, particularly communities in the Southern region. While the estimated number of new AIDS cases in the nation and the South remained stable between 2005 and 2008, the Southern region continued to have the highest estimated number of new AIDS cases of any U.S. region and the highest estimated number of people living with AIDS during this period.
While community-based organizations (CBOs) serve as catalysts for HIV/AIDS prevention and treatment activities, many do not consistently evaluate their programs, limiting the degree to which the effectiveness of interventions can be measured.
Evaluation capacity is characterized by the degree of evaluation skill, recognition of the utility of monitoring and assessment, and an organizational culture that incorporates evaluation into program design and implementation efforts.
Factors limiting consistent CBO evaluation practice are associated with other well-recognized challenges to program delivery, including limited time, funding, and staff.
Evaluability assessment (EA) is a critical evaluation planning method designed to determine the feasibility of program assessment prior to program implementation. This is particularly valuable among CBOs, which operate in fluid internal and community contexts. Over the past twenty years, EA has been used to identify the degree to which a program is ready for evaluation as well as to establish goals and objectives based on stakeholder input and consensus.
Evaluability assessment and evaluation capacity are distinguishable in that the former should precede and determine the need for the latter. Evaluability assessment is at the center of developing a formative inventory for targeted evaluation capacity-building. In the frequently resource-challenged environments of CBOs, identification of the plan and system in place for evaluation of a program is critical to determining the evaluation capacity needs of the intervention team.
The Pfizer Foundation funded CBOs conducting HIV primary and secondary prevention activities in Alabama, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, and Texas between 2004 and 2007 (Pfizer Grantees). The community-based organizations were small to mid-size organizations with as few as two full-time paid staff and as many as 40 unpaid volunteer staff. All annual budgets were less than $1 million. Twenty-four CBOs were initially funded. Twenty-three re-applied and were successfully re-funded in 2005; 20 received continuation funding in 2006. Organizations had established partnerships with health facilities, academic institutions, and faith-based organizations. Most intervened with specific racial/ethnic groups while maintaining services or interventions for other groups. Special populations served included peer educators, youth attending community organizations, middle and high school students, and individuals in substance abuse counseling and recovery.
A three-partner collaborative was then developed to identify, prioritize, and respond to the evaluation capacity needs of community-based organizations (CBOs) conducting HIV primary and secondary prevention. First, the Morehouse School of Medicine Prevention Research Center (MSM PRC) evaluation team brought a participatory evaluation approach, expertise in qualitative and quantitative evaluation, and cultural competence in community-based research to the partnership. Second, the Pfizer Foundation Southern HIV/AIDS Prevention Initiative (Pfizer Initiative) represented the organizational will and fiscal capacity to support CBOs conducting HIV prevention in the South. Third, Pfizer Initiative-funded CBOs (Pfizer Grantees), supported by funds of up to $55,000, were central partners, bringing innovative approaches to sexual health promotion and HIV prevention and a direct link to communities and stakeholders. The purpose of this article is to report the process of community-campus engagement in an initiative developed to understand and build the evaluation capacities of promising community-based organizations.
Each step in the evaluability assessment was systematically staged to build upon the previous step in preparation for evaluation capacity-building, grounded in an understanding of each CBO’s program and evaluation context. Employing a one-size-does-not-fit-all approach, capacity-building activities were tailored to each organization’s circumstances.
As part of its overall evaluation, MSM PRC developed and conducted an initial cross-site program assessment survey (C-PAS) of the CBOs to determine their knowledge, skills, and abilities for planning and conducting community interventions and their technical needs.
The C-PAS questionnaire was designed to be completed in 15–20 minutes and to capture key personnel’s self-reported knowledge and skills relating to key steps in developing, implementing, and evaluating a community-based intervention, as well as the organization’s specific abilities to perform essential functions. We used knowledge to refer to an individual’s understanding of the queried activities and skills to represent an individual’s proficiency in performing those activities. Knowledge and skills were measured on a five-point Likert scale across six key variable constructs of community program planning and implementation: problem identification, needs assessment, developing goals and objectives, gathering program input and feedback, prevention implementation planning, and evaluation of community intervention. Technical assistance needs were assessed (as yes/no variables) for the following: logic model development, data collection tool development, data management, protocol development, qualitative and quantitative methods, and evaluation.
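To make this scoring approach concrete, the sketch below (written in Python; the Initiative's analyses were conducted in SPSS, so this is only an illustrative analogue) shows how five-point Likert responses on the six constructs might be rolled up into a per-respondent summary score and how yes/no technical assistance items might be tallied. All column names and response values are hypothetical.

```python
# Minimal sketch (hypothetical data and column names): scoring C-PAS-style
# Likert items into per-respondent summary scores, analogous in spirit to the
# SPSS analyses described in the article.
import pandas as pd

# Each row is one respondent; construct items use a five-point Likert scale,
# technical assistance (TA) needs are yes/no.
responses = pd.DataFrame({
    "problem_identification":  [4, 3, 5],
    "needs_assessment":        [3, 3, 4],
    "goals_objectives":        [2, 4, 4],
    "program_feedback":        [3, 2, 5],
    "implementation_planning": [4, 3, 3],
    "evaluation":              [2, 2, 4],
    "ta_logic_model":          ["yes", "no", "yes"],
    "ta_data_management":      ["yes", "yes", "no"],
})

likert_items = [
    "problem_identification", "needs_assessment", "goals_objectives",
    "program_feedback", "implementation_planning", "evaluation",
]

# Summary score: sum of the six construct items per respondent.
responses["knowledge_summary"] = responses[likert_items].sum(axis=1)

# Proportion of respondents reporting each technical assistance need.
ta_items = ["ta_logic_model", "ta_data_management"]
ta_need_rates = (responses[ta_items] == "yes").mean()

print(responses[["knowledge_summary"]])
print(ta_need_rates)
```

The same structure extends to the remaining technical assistance areas and to the ability items, which the survey scored on a separate 1 (low) to 5 (high) scale.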
The C-PAS was initially administered in February 2005 and at two subsequent time points to determine changes in knowledge, skills, abilities, and technical needs prior to each capacity-building effort during the Initiative. Self-administered questionnaires were sent to each organizational designee by U.S. and electronic mail to gain varying perspectives on program processes. Pfizer Grantees received two U.S. mail reminders and one telephone reminder to return surveys. The survey response rate was 84.8% for the first C-PAS, 78.3% for the second C-PAS, and 93.2% for the third C-PAS. Responses from executive directors and primary program staff were combined because differences between the groups were non-significant.
Following completion of all evaluation capacity-building activities, MSM PRC conducted one-on-one, semi-structured teleconferences with each Pfizer Grantee between September and December 2006 to gather feedback on the evaluation capacity-building partnership throughout the Initiative and on the continuing challenges and technical assistance needs organizations faced as they worked to implement new competencies after the Initiative’s conclusion. Twenty Pfizer Grantees participated in the teleconferences, a 91% participation rate. Each teleconference was taped with the prior permission of participants and transcribed. Qualitative data analysis was conducted by two reviewers, who independently identified trends and themes in the teleconference responses.
Evaluability assessment helped to identify areas for targeted evaluation capacity-building among Pfizer Grantees. Challenges identified during this first step formed the evidence base for the development of evaluation capacity-building activities for the duration of the Pfizer Initiative. Each area described below represents an identified challenge.
Comprehensive C-PAS results have been detailed elsewhere and are briefly highlighted here to indicate key successes in implementation and to provide context for the implementation of the community-engaged model and its associated results.
Pfizer Grantee organizations were asked to describe the aspect(s) of the evaluation capacity-building activities provided by the MSM PRC that they found most valuable, as a means through which the MSM PRC could identify the strengths and challenges of its ECB model. The training format preferred by most Pfizer Grantees was face-to-face conferences (cited 14 times). Hands-on application and networking were valued characteristics of the face-to-face format. The quotations below represent the perspectives of Grantees on the process.
Okay, definitely the face-to-face are much, much, much more effective …. They just are. I don’t think you can really make up for being together in the same room sharing. It’s just so much more education that way. I know it’s much more expensive to do things that way but it really is the thing that works the best.

…I’d say the hands-on application …because we had opportunities to participate in several group-related activities to apply some of the information that we’ve learned just with each other but also knowing how to bring back once we got back to our individual sites.

…Cause I think you get more and plus you’re focused then. On teleconferences no matter how much you’re trying to not have other stuff around you it’s hard to keep that away.
Organizations identified data entry training, logic model training, and qualitative training, in that order, as the most valuable content areas offered by the MSM PRC in its evaluation capacity-building activities.
Epi Info …that was that thing that we’re going to be using more than anything else because what we needed was to have a way of collecting data that was simple and efficient and that anybody could utilize and anyone can use. The other thing is the logic model of the subjects that we learned, that was the one that is most effective and that we needed to know more than anything else because that’s something that we can use for other grants and other programs and it helps us set up our programs in a logical fashion that shows effective change and then the other thing is that the last conference we were talking about data collection and things like that at the conference.

Well, the logic models was the best thing that happened and the fact that it was insisted and stressed so much at every meeting and every encounter that we had with [MSM PRC] I think it was great. Plus it’s easy for us [not] to develop a logic model for other programs that we had or programs that we wanted to implement.
Pfizer Grantees were asked if there were areas specific to their organizations that continue to be program evaluation challenges. Most discussed program evaluation challenges related to data collection, entry, and analysis.
All that [data] is handwritten now and how we could capture that on a form and then it in a system that we can sort of look at is our challenge.

Yeah I would say the data input. We’re just very small. We don’t have anyone. It is partly our fault too but it’s an area that we constantly face [as] a challenge.

…Looking at having a consistent model to track our staff so they have better understanding—our entire staff-so they have a better understanding of data collection and analysis and then have regular like quarterly opportunities to review data, to analyze it and to reflect on it as a staff.
Organizational and staff resources were frequently cited challenges faced by Pfizer Grantees when considering the priority of evaluation. Many described program evaluation challenges surrounding limited staff and resources and insufficient staff involvement and buy-in.
Well there is a cap, there is a 7% cap on the amount of money, federal money, that you can use in your agency for administration. So if you can imagine you have a $100,000 grant but you can only use 7% of it for any of your infrastructure, any of your data collection and all of that doesn’t count, it makes it very, very difficult because they don’t want you spending their money on [evaluation] …many of our agencies run with this Catch-22.

…A couple of us [are] no longer here and I have myself but there’s only a couple of people with the organization that understand what evaluation is, what its value is, why we do it and the fact of the matters is if the staff doesn’t understand and appreciate evaluation …. So I think that talking about evaluation with our whole staff is important because they’re the ones that are going to input the data. They’re the ones that are going to be collecting this.

I think our biggest challenge is time or being understaffed here and it’s hard to really allocate the time to do this because it is important and it’s hard to take the time and do it when you have so much else to do ….
The mixed-method approach employed by the MSM PRC provided an evidence-based model by which to assess both the processes and outcomes of community-engaged evaluation capacity-building. The C-PAS, administered directly before evaluation capacity-building events began, at a one-year follow-up, and at the conclusion of evaluation capacity-building activities, indicated significant improvement among Pfizer Grantees in their capacity to plan, conduct, and evaluate interventions, as previously noted, along with significantly decreased technical assistance needs. Following the conclusion of evaluation capacity-building activities, the 2006 Pfizer Grantee semi-structured interviews were a critical process through which the MSM PRC could conclude its evaluation capacity-building activities informed by the perspectives and perceptions of the Pfizer Grantees, the target audience for whom the Initiative was developed. Organizations considering similar initiatives should consider the following recommendations, detailed below, when planning their own capacity-building work to support similar CBOs.
Assessment should be on-going in order to increase the likelihood of local evaluation buy-in and the diffusion of skills, knowledge and abilities within organizations. In addition to limited resources (human, fiscal and time), many organizations experience high staff turnover at the administrative and support levels. Participants in capacity-building activities during the first year of a project may no longer be involved in projects by the funding cycle’s conclusion. This may affect the degree of improvement in evaluation capacity. On-going assessment will also help in the design of training activities that can reasonably accommodate the format and content preferences of participants.
Pfizer Grantees preferred methods that would be most relevant to their current programmatic needs, rather than those that were abstract or perceived as “academic.” Sensitivity to the theoretical importance of reliable and valid evaluation measures must be coupled with respect for the resources of the target audiences and for what they perceive as value added. Some organizations reported the value of electronic tools and templates that could be conveniently used at their local organizations so that they would not, as they said, “have to reinvent the wheel.”
Knowledge, ability, and skill acquisition are often challenged by the contexts within which programs are implemented. While quantitative (e.g., survey) data provide a perspective that is frequently said to be more objective, qualitative data collected from the target audiences through teleconferences, focus groups, and open-ended questions help to shed light on what the data mean. Both methods can be used together to develop the most effective strategies, tailored to and informed by the unique needs of recipients.
While Pfizer Grantees have made important strides in evaluation capacity, progress has been incremental and operationalized within the contexts of staff turnover, the competing demands of stakeholders, and other priorities. Organizations have stressed the importance of offering program-specific support in areas including data entry, management, and analysis. Programs may alternatively be offered resources through which they can access free or low-cost trainings or on-line modules that can be shared with other program staff in order to institutionalize evaluation skills within their organizations.
Evaluability assessment, capacity-building activities, and qualitative assessment show that evaluation capacity-building is an iterative process, requiring partnerships between funders, evaluators, and grantees over time to fine-tune strategies in an audience-targeted manner, because one size does not fit all. Current trends in community-based participatory research and evaluation encourage the empowerment of community organizations to understand, plan, and conduct local evaluation efforts. Movement toward these practices necessitates the acquisition of practical skills that are valued within each organization and become part of all programmatic activities. The benefits of evaluation skill development and buy-in include the development of measurable indicators, replicable implementation and evaluation plans, and dissemination of systematic, data-driven results, all of which greatly influence a program’s likelihood of sustainability. While the processes, outcomes, and recommendations described in this article represent a community-campus partnership to strengthen small to mid-sized community-based organizations (organizations with annual revenues of less than $1 million) conducting HIV/AIDS prevention in the South, this model may be adapted to similar partnerships with organizations addressing other chronic health disparities. Stated differently, strategic community-based participatory approaches, including evaluability assessment and evidence-based capacity-building initiatives, are relevant approaches to consider for increasing the likelihood of ownership and buy-in in evaluation partnerships with CBOs.
The MSM PRC seeks to identify the degree to which programs are achieving their goals and objectives and to enhance the skills critical to planning, implementing, and evaluating programs for increased program effectiveness and sustainability. This report represents evaluation measures, interactions, and processes that were integral to increased evaluation capacity, as measured by results of the Cross-site Program Assessment Survey (see the C-PAS skill, knowledge, and ability summary scores table below).
This project was made possible through funding from the Pfizer Foundation and the Morehouse School of Medicine Prevention Research Center CDC grant #1U48DP001907-01.
Evaluability assessment and evaluation capacity-building steps.
CBO = Community-based organization
MSM-PRC = Morehouse School of Medicine—Prevention Research Center
CBO EVALUABILITY ASSESSMENT RESULTS
| Evaluability Challenge | Opportunity for Capacity-building |
|---|---|
| Development of Measurable Change Objectives | While there was a clear articulation of outputs, or the products of each strategy or activity (e.g., the number of peer educators trained or condoms distributed), very few grantees identified measurable objectives to gauge short- or mid-term change resulting from these efforts. Outcome-based evaluation requires systematic collection of data that will capture desired change, beyond the documentation of numbers of activities and related products. |
| Selection/Revision of Methods and Tools to Measure Progress and Desired Outcomes | Several Pfizer Grantees expressed the desire to revise or amend existing data collection tools to assess not only HIV knowledge but also risk perceptions or behavioral intentions among prioritized target populations. Further, few had used both qualitative and quantitative data to measure processes and outcomes. |
| Data Collection, Storage and Analysis | Limited evaluation accountability in previously funded programs and limited time and staff to spend on systematic data collection or entry were among the reasons cited for having little or no systematic data management system. Further, most did not have an electronic database for the survey data collected. These factors limited the degree to which data quality checks and subsequent data analysis could be conducted to assess interventions. |
CROSS-SITE PROGRAM ASSESSMENT SURVEY (C-PAS) SAMPLE QUESTIONS
| Question | Answer Choices |
|---|---|
| Please rate your organization’s ability to: develop data collection tools; conduct focus groups; enter collected data into the computer; analyze collected data | On a scale of 1 (Low) to 5 (High) |
| Please indicate your current level of skill (knowledge or ability, respectively) related to each of the following steps in community program development: development of goals and objectives; data management; qualitative methods | 1 = None, 2 = Little, 3 = Some, 4 = A lot, 5 = Extensive |
| Please indicate all technical assistance needs: logic model development to help chart a clearly visible path for program planning; data management; data collection tool development; protocol development to increase consistent data collection, management and analysis; qualitative/quantitative methods to learn the best ways to measure expected changes | Yes or No |
C-PAS = Cross-site program assessment survey
CBO EVALUATION CAPACITY-BUILDING AND MEASUREMENT SCHEDULE 2004–2006
| Activity | Description |
|---|---|
| Pfizer Foundation Southern HIV/AIDS Prevention Initiative Orientation to Evaluation, June 2, 2004 | An introduction to evaluation philosophy and methods and procedures of community intervention planning, implementation, and evaluation. |
| 1st cross-site program assessment survey (C-PAS) conducted: February–March 2005 | |
| Cross-site Evaluation Teleconferences and Site Visits, Fall 2004 | Conducted a teleconference with each grantee to gain better insight into the background of the programs, current status of intervention development, and future plans. Subsequent site visits to each organization were also conducted by a member of the evaluation team and a member from Pfizer to gain additional insight into organizations’ environments and their HIV/AIDS and other programs. |
| Training Workshop, May 11–12, 2005 | Conducted in response to identified evaluation challenges and capacity-building needs during project year 2004, including data collection methods, tools, and analysis; database development; and logic model review. |
| Training Teleconferences, April–August 2005 | A total of 12 teleconferences were designed and conducted to facilitate evaluation capacity-building opportunities through (a) reinforcement of intervention skills attained, (b) provision of an outlet for evaluation resource-sharing, and (c) discussion of real-time evaluation case studies that may be applied to individual program activities. |
| 2nd C-PAS conducted: February–March 2006 | |
| Training Conference, June 14–16, 2006 | Provided in-depth training in areas identified through evaluation of C-PAS findings, capacity-building activities, and feedback from grantees, including developing survey questions and focus group guides; qualitative and quantitative data entry, management, and analysis; and use of collected data. |
| Training Web Conferences, March and August 2006 | Two capacity-building training web conferences were developed and facilitated in Year 3, designed to prepare grantees for sustained programmatic and evaluation activities beyond the last year of the initiative. Each web conference was offered twice, on separate days, to allow for small-group interaction among 9–11 grantees and to accommodate scheduling needs. |
| Program Assessment Teleconferences, August 9–10, 2006 | One-on-one, semi-structured teleconferences were conducted to gain better insight into grantees’ technical assistance needs for completion of the 2006 Program Assessment and continued evaluation challenges that remain at the conclusion of the initiative. |
| 3rd C-PAS conducted: September–October 2006 | |
CBO = Community-based organization
C-PAS = Cross-site program assessment survey
CROSS-SITE PROGRAM ASSESSMENT SURVEY (C-PAS) SKILL, KNOWLEDGE AND ABILITY SUMMARY SCORES
| C-PAS | Skill | Knowledge | Ability |
|---|---|---|---|
| 1st (n=39) | 2.83 (0.67) | 2.83 (1.33) | 3.88 ± 0.42 |
| 2nd (n=36) | 3.00 (0.75) | 3.00 (0.66) | 4.07 ± 0.45 |
| 3rd (n=41) | 3.16 (0.83) | 3.17 (0.34) | 4.21 ± 0.46 |
Skills and knowledge based on a five-point Likert scale of 0 (none) to 4 (extensive); summary score of six items.
Abilities based on a five-point Likert scale of 1 (low) to 5 (high); summary score of 20 items.
Overall trend in higher summary skills scores is not statistically significant, p=.057.
Overall trend in higher summary knowledge scores is statistically significant, p=.022.
Overall trend in higher summary abilities scores is statistically significant, p=.0004.
IQR = Interquartile Range
SD = Standard Deviation
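As a companion illustration of the trend results summarized in the table above, the sketch below (Python, with simulated and purely illustrative scores; the article does not specify which trend test was run in SPSS) shows one reasonable way to examine whether summary scores rise across the three C-PAS administrations, by regressing the score on survey wave.

```python
# Minimal sketch (hypothetical, simulated data): testing for a linear trend in
# summary scores across the three C-PAS administrations. This is one possible
# approach, not necessarily the test used in the original SPSS analyses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-respondent summary scores for each administration
# (n = 39, 36, 41, echoing the table above; the values are illustrative only).
waves = {
    1: rng.normal(2.83, 0.67, size=39),
    2: rng.normal(3.00, 0.75, size=36),
    3: rng.normal(3.16, 0.83, size=41),
}

# Stack into (wave, score) pairs and fit score ~ wave.
x = np.concatenate([np.full(scores.size, wave) for wave, scores in waves.items()])
y = np.concatenate(list(waves.values()))
result = stats.linregress(x, y)

print(f"slope per administration: {result.slope:.3f}, p = {result.pvalue:.4f}")
```

A rank-based alternative, such as a Spearman correlation between wave and score, would follow the same structure if the Likert summary scores are treated as ordinal.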