Using Evaluability Assessment and Evaluation Capacity-building to Strengthen Community-based Prevention Initiatives

Tabia Henry Akintobi, PhD, MPH; Ellen M. Yancey, PhD; Pamela Daniels, PhD, MPH, MBA; Robert M. Mayberry, PhD, MPH, MS; DeBran Jacobs, MPH; Jamillah Berry, PhD, MSW

Department of Community Health and Preventive Medicine, Morehouse School of Medicine

Journal of Health Care for the Poor and Underserved, 2012. © Meharry Medical College

Objectives

To examine the process of community-campus engagement in an initiative developed to build evaluation capacities of community-based organizations (CBOs).

Methods

Evaluability assessment, capacity-building, self-administered surveys and semi-structured interviews were conducted from 2004 to 2007 and analyzed through transcript assessment and SPSS to identify trends, relationships, and capacity changes over time.

Results

Evaluability assessment identified CBO strengths in program planning and implementation and challenges in measurable objective development, systematic use of mixed methods, data management and analysis. Evaluability assessment informed evaluation capacity-building (ECB) trainings, teleconferences and webinars that resulted in statistically significant improvements in evaluation knowledge, skills, and abilities. Post-initiative interviews indicated CBO preferences for face-to-face training in logic model development, mixed method data collection and analysis.

Conclusion

This report illustrates the use of mixed methods to plan, implement and evaluate a model to catalyze CBOs' systematic assessment of prevention initiatives, and considerations in evaluation capacity-building.

Keywords: Prevention, community-based organizations, program assessment, capacity-building

Funding: National Center on Minority Health and Health Disparities (NIMHD), P20 MD006881

The HIV/AIDS epidemic continues to adversely affect communities throughout the United States, particularly communities in the Southern region. While the estimated number of new AIDS cases in the nation and the South remained stable between 2005 and 2008, the Southern region continued to have the highest estimated number of new AIDS cases of any U.S. region and the highest estimated number of people living with AIDS during this period.1 Florida, Georgia, and Louisiana were among the states with the highest rates of people living with a diagnosis of HIV infection between 2006 and 2009.2

While community-based organizations (CBOs) serve as catalysts for HIV/AIDS prevention and treatment activities, many do not consistently evaluate their programs, limiting the degree to which the effectiveness of interventions can be measured.3–8 Increased accountability by funders and the public health emphasis on evidence-based interventions have heightened the importance of evaluation skills among CBOs.8–11 As a result, the sustainability of promising programs is increasingly contingent on evidence of programmatic successes.

Evaluation capacity is characterized by the degree of evaluation skill within an organization and by an organizational culture that recognizes the utility of monitoring and assessment and incorporates them into program design and implementation efforts.12–21 Evaluation capacity serves to strengthen a program's ability to consistently measure success, make programmatic improvements, and provide funders with evidence of good fiscal stewardship. The development of measurable indicators, replicable implementation plans, and systematic, data-driven programs are among the benefits of evaluation skill development.

The degree to which CBOs practice evaluation consistently is associated with other well-recognized challenges to program delivery, including limited time, funding, and staff.22–24 Other obstacles to evaluation practice include perceptions of a disconnect between service delivery and evaluation, fear of negative interpretation of evaluation results, and lack of confidence in the accuracy of evaluation findings.24 Data collection and analysis skills, technical assistance needs, and access to technology are also cited as barriers to evaluation practice.23 Building evaluation capacity is more effective when preceded by a review of a program's infrastructure, implementation model, and resources (human and fiscal) for monitoring progress. Evaluability assessment is at the center of developing a formative inventory for targeted evaluation capacity-building.

Evaluability assessment (EA) is a critical evaluation planning method designed to assess the feasibility of program assessment prior to program implementation. This is particularly valuable among CBOs, which operate in fluid internal and community contexts. Over the past twenty years, EA has been used to identify the degree to which a program is ready for evaluation as well as to establish goals and objectives based on stakeholder input and consensus.25 Evaluability assessment processes include face-to-face interviews, meetings or site visits to understand program contexts, and the review of program documents to determine how curriculum content and other programmatic components are evaluated in real time.25–28 Frequently at the center of EA outcomes is the drafting, review, and establishment of a program logic model, along with working with program staff to identify the resources and support needed to implement and evaluate it.27

Evaluability assessment and evaluation capacity are distinguishable in that the former should precede and determine the need for the latter. Evaluability assessment is at the center of developing a formative inventory for targeted evaluation capacity-building. In the frequently resource-challenged environments of CBOs, identification of the plan and system in place for evaluation of a program is critical to determining the evaluation capacity needs of the intervention team.

Between 2004 and 2007, the Pfizer Foundation funded CBOs (Pfizer Grantees) conducting HIV primary and secondary prevention activities in the states of Alabama, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, and Texas. Community-based organizations were small to mid-size organizations with as few as two full-time paid staff and as many as 40 unpaid volunteer staff. All annual budgets were less than $1 million. Twenty-four CBOs were initially funded. Twenty-three re-applied and were successfully re-funded in 2005; 20 received continuation funding in 2006. Organizations had established partnerships with health facilities, academic institutions, and faith-based organizations. Most intervened with specific racial/ethnic groups while maintaining services or interventions for other groups. The special populations served included peer educators, youth attending community organizations, middle and high school students, and individuals in substance abuse counseling and recovery.28

Current study

A three-partner collaborative was developed to identify, prioritize and respond to evaluation capacity needs of community-based organizations (CBOs) conducting HIV primary and secondary prevention. First, the Morehouse School of Medicine Prevention Research Center (MSM PRC) brought a participatory evaluation approach, expertise in qualitative and quantitative evaluation, and cultural competence in community-based research to the partnership. Second, the Pfizer Foundation Southern HIV/AIDS Prevention Initiative (Pfizer Initiative) represented an organizational will and fiscal capacity to support CBOs conducting HIV prevention in the South. Third, Pfizer Initiative-funded CBOs (Pfizer Grantees), supported by funds up to $55,000, were central partners, bringing innovative approaches to sexual health promotion and HIV prevention, and a direct link to communities and stakeholders. The purpose of this article is to report the process of community-campus engagement in an initiative developed to understand and build the evaluation capacities of promising community-based organizations.

Methods

Evaluability assessment

Step One. Semi-structured teleconference interviews were conducted between June and August 2004 with each Pfizer Grantee to gain a clearer understanding of proposed program implementation, evaluation plans, and program contexts. Central to these discussions were grant proposals submitted and funded by the Pfizer Initiative.28 Teleconferences were developed 1) to gain insight into how programs may have evolved in scope, goals and objectives since the initial proposal submission; 2) to identify how program success would be defined; 3) to assess intervention data needs, current data collection methods and procedures; 4) to identify how collected data would be used to assess program success; and 5) to determine technical assistance needs. This process was critical in order to determine whether each CBO was prepared to engage in evaluation capacity-building and cross-site assessment. A by-product was identification of each organization’s perceptions of the links between program components and outcomes, as well as the organizational and community factors that moderate program function.

Step Two. The MSM PRC assisted each Pfizer Grantee in the development of a logic model specific to their program needs and foci. The logic model conceptualization provides a blueprint for developing a clear connection between program components, facilitates timely monitoring of planned activities, and illustrates the order of events necessary for successful achievement of objectives.29–34 A logic model is a conceptual and visual depiction of a program's goals, inputs, strategies, outputs, outcomes, objectives, and the assumed causal relationships between them. A major strength of logic models is their utility in illustrating the linkages between existing conditions, activities, outcomes, and effects. Each element should be guided and informed by community stakeholders, among others, to ensure that both community needs and assets are understood and that activities are realistic and appropriate for the community context.35 Logic model development is important to evaluability assessment because it supports determination of whether programs have intentionally thought through the connection between their efforts and anticipated outcomes. Preliminary logic models were drafted by the MSM PRC based on program descriptions submitted in initial funding applications and semi-structured teleconference interviews.
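
To make the logic model structure concrete, the sketch below represents a hypothetical program's logic model as a plain data structure whose fields mirror the components named above (goal, inputs, strategies, outputs, outcomes, assumptions). The program content, field names, and helper function are illustrative assumptions, not drawn from any Pfizer Grantee logic model or from a tool used in the Initiative.

```python
# Illustrative sketch only: a program logic model expressed as a plain data
# structure. All program content below is hypothetical.

LOGIC_MODEL = {
    "goal": "Reduce HIV risk behaviors among young adults in the service area",
    "inputs": ["2 FTE program staff", "volunteer peer educators", "Initiative funds"],
    "strategies": ["recruit and train peer educators",
                   "deliver group risk-reduction sessions"],
    "outputs": ["number of peer educators trained", "number of sessions delivered",
                "number of condoms distributed"],
    "short_term_outcomes": ["increased HIV knowledge", "increased risk perception"],
    "long_term_outcomes": ["increased condom use", "increased HIV testing uptake"],
    "assumptions": ["target audience can be reached through partner organizations"],
}


def describe(model: dict) -> str:
    """Render the logic model as a simple text blueprint, one component per block."""
    lines = []
    for component, items in model.items():
        if isinstance(items, str):
            items = [items]
        lines.append(f"{component.replace('_', ' ').title()}:")
        lines.extend(f"  - {item}" for item in items)
    return "\n".join(lines)


if __name__ == "__main__":
    print(describe(LOGIC_MODEL))
```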

Step Three. Site visits were conducted by MSM PRC trained evaluators from July to September 2004 to gain a first-hand perspective on the contexts in which each Pfizer Grantee operated. Meetings with key administrative staff, program staff, and target audiences provided a two-way learning opportunity for the MSM PRC and CBOs 1) to establish trust among key stakeholders; 2) to engage in dialogue and to review documents critical to assessing evaluation capacities, including drafted logic models; and 3) to discuss technical assistance related to Pfizer Initiative evaluation requirements. Logic models drafted by the MSM PRC were also revised and approved by each Pfizer Grantee during site visits to ensure that they accurately reflected program components and success indicators. Table 1 includes results of the EA process that advised evaluation capacity-building activities.

Each step in the evaluability assessment was systematically staged to build upon the previous one, preparing for evaluation capacity-building through an understanding of each CBO's program and evaluation context. Employing a one-size-does-not-fit-all approach, the semi-structured interviews engaged each CBO to better understand evaluation technical assistance needs, as well as to contribute to identifying content for cross-site evaluation capacity-building implementation. The information gathered through each interview was used to develop a drafted logic model representing each program's activities and to link program success and its measurement. Site visits were conducted not only to confirm with CBO program staff the accuracy of the logic model drafted by the MSM PRC, but also to establish trust and positive rapport and to identify local program stakeholders that would be engaged in the Pfizer Initiative over time. As detailed by Painter et al., the value of the EA exercise lies in its ability to explore (compare and contrast) perspectives of the program among program stakeholders, define the program's logic, and define program achievements towards the development of more useful and culturally relevant evaluation.6 Figure 1 depicts each step in the EA process and how each preceding step informed the next towards the development of evaluation capacity-building implementation (Step 4).

Evaluation capacity-building and measurement

Step Four. The MSM PRC used EA results to develop evaluation capacity-building content and subsequent implementation. The MSM PRC rationale for developing targeted approaches, advised by the EA and tailored to community stakeholders, is supported in the literature, which indicates that the design of activities must account for, among other elements, ECB participant characteristics and their organizational resources.36,37 Pfizer Grantee executive directors and program staff members were actively engaged in formal didactic presentations, in-person training workshops, teleconferences, and web conferences (all with feedback and discussion sessions) designed to enhance capabilities to plan, implement, and evaluate their community-level interventions to prevent HIV/AIDS. Capacity-building used a participatory evaluation framework, with all sessions offering interactive opportunities to apply learning to funded interventions and discussions designed to maximize input through questions and answers and through reflection on, and application to, program implementation challenges and experiences.22,34,38 Table 3 contains the schedule and content description of the evaluation capacity-building provided.

As part of its overall evaluation, MSM PRC developed and conducted an initial cross-site program assessment survey (C-PAS) of the CBOs to determine their knowledge, skills, and abilities for planning and conducting community interventions and their technical needs.26 The C-PAS was adapted from a previous instrument used successfully in substance abuse intervention training, pretested among MSM PRC community coalition board members, and reviewed and modified prior to administration.38

The C-PAS questionnaire was designed to be completed in 15–20 minutes and to capture key personnel's self-reported knowledge and skills relating to key steps in developing, implementing, and evaluating a community-based intervention, as well as the organization's specific abilities to perform essential functions. We used knowledge to refer to individual understanding of queried activities and skills to represent individual proficiency in performing certain activities. Knowledge and skills were measured on a five-point Likert scale using six key variable constructs of community program planning and implementation: problem identification, needs assessment, developing goals and objectives, gathering program input and feedback, prevention implementation planning, and evaluation of community intervention. Technical assistance needs were assessed (as a yes/no variable) for the following: logic model development, data collection tool development, data management, protocol development, qualitative and quantitative methods, and evaluation. Table 2 provides examples of questions included in the C-PAS questionnaire.
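
As an illustration of how a construct-level composite can be derived from such Likert items, the sketch below averages six skill items for one hypothetical respondent. The item keys, the respondent's ratings, and the use of a simple mean are assumptions for demonstration; the article does not specify the exact scoring procedure applied in SPSS.

```python
# Illustrative only: a per-respondent composite score from six C-PAS-style
# Likert items. Item names and ratings are hypothetical.
import statistics

SKILL_ITEMS = [
    "problem_identification",
    "needs_assessment",
    "goals_and_objectives",
    "program_input_and_feedback",
    "prevention_implementation_planning",
    "evaluation_of_community_intervention",
]


def composite_score(response: dict) -> float:
    """Mean of the six construct items (1 = none ... 5 = extensive)."""
    return statistics.mean(response[item] for item in SKILL_ITEMS)


# Example: one respondent's self-rated skills at a single C-PAS administration.
respondent = {
    "problem_identification": 4,
    "needs_assessment": 3,
    "goals_and_objectives": 3,
    "program_input_and_feedback": 4,
    "prevention_implementation_planning": 2,
    "evaluation_of_community_intervention": 2,
}

print(round(composite_score(respondent), 2))  # 3.0
```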

The C-PAS was initially administered in February 2005 and at two subsequent time points to determine changes in knowledge, skills, abilities, and technical needs prior to each capacity-building effort during the Initiative. Self-administered questionnaires were sent to each organizational designee by U.S. and electronic mail to gain varying perspectives on program processes. Pfizer Grantees received two U.S. mail reminders and one telephone reminder to return surveys. The survey response rate was 84.8% for the first C-PAS, 78.3% for the second C-PAS, and 93.2% for the third C-PAS. Responses from executive directors and primary program staff were combined because differences between the two groups were not statistically significant.

Pfizer grantee perspectives on the evaluation capacity-building partnership process

Following completion of all evaluation capacity-building activities, the MSM PRC conducted one-on-one, semi-structured teleconferences with each Pfizer Grantee from September to December 2006 to gather feedback on the evaluation capacity-building partnership throughout the Initiative and on the challenges and technical assistance needs that remained as Grantees worked to implement new competencies after the Initiative's conclusion. Twenty Pfizer Grantees participated in the teleconferences, resulting in a 91% participation rate. Each teleconference was taped with prior permission of participants and transcribed. Qualitative data analysis was conducted by two reviewers, who independently identified trends and themes in teleconference responses.

Results

Evaluability assessment

Evaluability assessment helped to identify areas for targeted evaluation capacity-building among Pfizer Grantees. The challenges identified during this first step served as the evidence base for the development of evaluation capacity-building activities for the duration of the Pfizer Initiative. Table 1 details each identified challenge and the corresponding opportunity for capacity-building that was addressed.

Evaluation capacity-building assessment

Comprehensive C-PAS results have been detailed elsewhere and are briefly highlighted here to indicate key successes in implementation and to provide context for the community-engaged model and its associated results.34 They indicate a steady positive progression in CBOs' abilities to plan, implement, and evaluate programs. Further, technical assistance needs for logic model development decreased significantly, from 48.7% at the initial C-PAS to 27.8% at the second C-PAS and 12.2% at the third C-PAS (p=.002). Technical assistance needs for utilizing qualitative and quantitative methods also decreased, from 64.1% at the first C-PAS to 50.0% and 36.3% at the second and third C-PAS, respectively (p=.048). Evaluation development technical assistance needs decreased from 84.6% at the initial C-PAS to 63.9% at the second C-PAS and 39.0% at the third C-PAS (p<.001).
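
To show one way such a decline in yes/no technical assistance needs could be examined, the sketch below applies a chi-square test of independence to counts reconstructed approximately from the reported percentages and the C-PAS sample sizes in Table 4 (48.7% of 39, 27.8% of 36, and 12.2% of 41 for logic model development). The article states the analysis was performed in SPSS and does not name the specific test, so the choice of test and the reconstructed counts are assumptions for illustration only.

```python
# Illustrative only: testing whether the proportion of grantees reporting a
# technical assistance need differed across the three C-PAS administrations.
# Counts are approximate reconstructions from reported percentages and sample
# sizes; the published analysis was run in SPSS and the exact test is unstated.
from scipy.stats import chi2_contingency

needed_ta = [19, 10, 5]        # approximate "yes, TA needed" counts (logic models)
sample_size = [39, 36, 41]     # respondents at C-PAS 1, 2, 3
observed = [[yes, n - yes] for yes, n in zip(needed_ta, sample_size)]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```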

Table 4 details composite scores for knowledge, skills, and abilities measured at each administration of the C-PAS. Among the individual knowledge scores in the composite, goal and objective development showed significant increases at the second and third C-PAS (p=.045). Among the individual ability scores in the composite, developing data collection tools (p=.002) and analyzing collected data (p=.011) showed significant increases across C-PAS administrations.
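
Because the skill and knowledge composites in Table 4 are summarized as medians with interquartile ranges across three independent survey administrations, a rank-based comparison is one plausible way to examine the overall trend. The sketch below runs a Kruskal-Wallis test on simulated per-respondent composite scores; the simulated data and the choice of this particular test are assumptions for illustration, since the individual-level data are not published and the article does not identify the trend test used in SPSS.

```python
# Illustrative only: comparing composite skill scores across the three C-PAS
# administrations with a rank-based (Kruskal-Wallis) test. Scores below are
# simulated to roughly match the medians in Table 4; they are not study data.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

cpas_1 = rng.normal(2.8, 0.5, size=39).clip(0, 4)
cpas_2 = rng.normal(3.0, 0.5, size=36).clip(0, 4)
cpas_3 = rng.normal(3.2, 0.5, size=41).clip(0, 4)

stat, p = kruskal(cpas_1, cpas_2, cpas_3)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
```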

Post-evaluation capacity-building Pfizer grantee semi-structured interviews

Pfizer Grantee organizations were asked to describe the most valuable aspect(s) of the evaluation capacity-building activities provided by the MSM PRC, as a means through which the MSM PRC could identify the strengths and challenges of its ECB model. The training format preferred by most Pfizer Grantees was face-to-face conferences (cited 14 times). Hands-on application and networking were valued characteristics of the face-to-face format. The quotations below represent the perspectives of Grantees on the process.

Okay, definitely the face-to-face are much, much, much more effective …. They just are. I don’t think you can really make up for being together in the same room sharing. It’s just so much more education that way. I know it’s much more expensive to do things that way but it really is the thing that works the best.

…I’d say the hands-on application …because we had opportunities to participate in several group-related activities to apply some of the information that we’ve learned just with each other but also knowing how to bring back once we got back to our individual sites.

…Cause I think you get more and plus you’re focused then. On teleconferences no matter how much you’re trying to not have other stuff around you it’s hard to keep that away.

Organizations identified data entry training, logic model training, and qualitative methods training, in that order, as the most valuable content areas offered through the MSM PRC evaluation capacity-building activities.

Epi Info …that was that thing that we’re going to be using more than anything else because what we needed was to have a way of collecting data that was simple and efficient and that anybody could utilize and anyone can use.

The other thing is the logic model of the subjects that we learned, that was the one that is most effective and that we needed to know more than anything else because that’s something that we can use for other grants and other programs and it helps us set up our programs in a logical fashion that shows effective change and then the other thing is that the last conference we were talking about data collection and things like that at the conference.

Well, the logic models was the best thing that happened and the fact that it was insisted and stressed so much at every meeting and every encounter that we had with [MSM PRC] I think it was great. Plus it’s easy for us [not] to develop a logic model for other programs that we had or programs that we wanted to implement.

Pfizer Grantees were asked if there were areas specific to their organizations that continue to be program evaluation challenges. Most discussed program evaluation challenges related to data collection, entry, and analysis.

All that [data] is handwritten now and how we could capture that on a form and then it in a system that we can sort of look at is our challenge.

Yeah I would say the data input. We’re just very small. We don’t have anyone. It is partly our fault too but it’s an area that we constantly face [as] a challenge.

…Looking at having a consistent model to track our staff so they have better understanding—our entire staff-so they have a better understanding of data collection and analysis and then have regular like quarterly opportunities to review data, to analyze it and to reflect on it as a staff.

Organizational and staff resources were also cited as challenges faced by Pfizer Grantees when considering the priority of evaluation. Many described program evaluation challenges surrounding staffing, resources, and staff involvement and buy-in.

Well there is a cap, there is a 7% cap on the amount of money, federal money, that you can use in your agency for administration. So if you can imagine you have a $100,000 grant but you can only use 7% of it for any of your infrastructure, any of your data collection and all of that doesn’t count, it makes it very, very difficult because they don’t want you spending their money on [evaluation] …many of our agencies run with this Catch-22.

…A couple of us [are] no longer here and I have myself but there’s only a couple of people with the organization that understand what evaluation is, what its value is, why we do it and the fact of the matters is if the staff doesn’t understand and appreciate evaluation …. So I think that talking about evaluation with our whole staff is important because they’re the ones that are going to input the data. They’re the ones that are going to be collecting this.

I think our biggest challenge is time or being understaffed here and it’s hard to really allocate the time to do this because it is important and it’s hard to take the time and do it when you have so much else to do ….

Discussion

The mixed-method approach employed by the MSM PRC provided an evidence-based model by which to assess both the processes and outcomes of community-engaged evaluation capacity-building. As previously noted, the C-PAS indicated significant improvements in Pfizer Grantees' abilities to plan, conduct, and evaluate interventions and significant decreases in technical assistance needs, measured directly before evaluation capacity-building events began, at follow-up after one year, and at the conclusion of evaluation capacity-building activities. The 2006 Pfizer Grantee semi-structured interviews, conducted after these activities ended, were a critical process through which the MSM PRC could conclude the Initiative with the perspectives and perceptions of the Pfizer Grantees, the target audience for whom the Initiative was developed. Organizations considering similar initiatives should consider the recommendations detailed below when planning their own capacity-building work among similar CBOs.

Conduct systematic evaluability assessment throughout the course of funding support

Assessment should be on-going in order to increase the likelihood of local evaluation buy-in and the diffusion of skills, knowledge and abilities within organizations. In addition to limited resources (human, fiscal and time), many organizations experience high staff turnover at the administrative and support levels. Participants in capacity-building activities during the first year of a project may no longer be involved in projects by the funding cycle’s conclusion. This may affect the degree of improvement in evaluation capacity. On-going assessment will also help in the design of training activities that can reasonably accommodate the format and content preferences of participants.

Adopt practical, hands-on learning opportunities for adult learners

Pfizer Grantees preferred methods that were most relevant to their current programmatic needs, rather than those that were abstract or perceived as "academic." Attention to the theoretical importance of reliable and valid evaluation measures must be coupled with respect for the resources and the value added from the perspective of the target audiences. Some organizations reported the value of electronic tools and templates that could be conveniently used at their local organizations so that they would not, as they said, "have to reinvent the wheel."

Employ quantitative and qualitative methods to measure the success of program processes and outcomes

Knowledge, ability, and skill acquisition are often challenged by the contexts within which programs are implemented. While quantitative (e.g., survey) data provide a perspective that is frequently regarded as more objective, qualitative data collected from target audiences through teleconferences, focus groups, and open-ended questions help to shed light on what the data mean. Both methods can be used together to develop the most effective strategies, tailored to and informed by the unique needs of recipients.

Offer or facilitate on-going technical support

While Pfizer Grantees made important strides in evaluation capacity, progress has been incremental and operationalized within the contexts of staff turnover, the competing demands of stakeholders, and other organizational priorities. Organizations have stressed the importance of offering program-specific support in areas including data entry, management, and analysis. Programs may alternatively be offered resources through which they can access free or low-cost trainings or online modules that can be shared with other program staff in order to institutionalize evaluation skills within their organizations.

Conclusion

Evaluability assessment, capacity-building activities, and qualitative assessment show that evaluation capacity-building is an iterative process, requiring partnerships between funders, evaluators, and grantees over time to fine-tune strategies in an audience-targeted manner, because one size does not fit all. Current trends in community-based participatory research and evaluation encourage the empowerment of community organizations to understand, plan, and conduct local evaluation efforts. The path toward these practices necessitates the acquisition of practical skills that are valued within each organization and become part of all programmatic activities. The benefits of evaluation skill development and buy-in include the development of measurable indicators, replicable implementation and evaluation plans, and dissemination of systematic, data-driven results, greatly influencing a program's likelihood of sustainability. While the processes, outcomes, and recommendations described in this article represent a community-campus partnership to strengthen small to mid-sized community-based organizations (organizations with annual revenues of less than $1 million) conducting HIV/AIDS prevention in the South, this model may be adapted to similar partnerships with organizations that are addressing other chronic health disparities. Stated differently, strategic community-based participatory approaches, including evaluability assessment and evidence-based capacity-building initiatives, are relevant approaches to consider to increase the likelihood of ownership and buy-in in evaluation partnerships with CBOs.

The MSM PRC seeks to identify the degree to which programs are achieving developed goals and objectives and to enhance the skills critical to planning, implementing, and evaluating programs for increased program effectiveness and sustainability. This report represents evaluation measures, interactions and processes that were integral to increased evaluation capacity, as measured by results of the Cross-site Program Assessment Survey (see Table 4). It is anticipated that the dissemination of cross-site program assessment and capacity-building activities will help to inform and improve the quality of measurable, replicable evidence-based HIV/AIDS prevention programs.

Acknowledgments

This project was made possible through funding from the Pfizer Foundation and the Morehouse School of Medicine Prevention Research Center (CDC grant #1U48DP001907-01).

References

1. Centers for Disease Control and Prevention. HIV Surveillance Report: diagnoses of HIV infection and AIDS in the United States and dependent areas, 2008. Vol. 20. Atlanta, GA: Centers for Disease Control and Prevention, 2010. Available at: http://www.cdc.gov/hiv/surveillance/resources/reports/2008report/pdf/2008SurveillanceReport.pdf.
2. Centers for Disease Control and Prevention. HIV Surveillance Report: diagnoses of HIV infection and AIDS in the United States and dependent areas, 2009. Vol. 21. Atlanta, GA: Centers for Disease Control and Prevention, 2011. Available at: http://www.cdc.gov/hiv/surveillance/resources/reports/2009report/pdf/2009SurveillanceReport.pdf.
3. DiFranceisco W, Kelly JA, Otto-Salaj L. Factors influencing attitudes within AIDS service organizations toward the use of research-based HIV prevention interventions. AIDS Educ Prev. 1999;11(1):72–86.
4. Richter DL, Prince MS, Potts LH. Assessing the HIV prevention capacity-building needs of community-based organizations. J Public Health Manag Pract. 2000;6(4):86–97.
5. Cashman SB, Adeky S, Allen AJ 3rd. The power and the promise: working with communities to analyze data, interpret findings, and get to outcomes. Am J Public Health. 2008;98(8):1407–17.
6. Painter TM, Ngalame PM, Lucas B. Strategies used by community-based organizations to evaluate their locally developed HIV prevention interventions: lessons learned from the CDC's innovative interventions project. AIDS Educ Prev. 2010;22(5):387–401.
7. Kegeles SM, Rebchook GM. Challenges and facilitators to building program evaluation capacity among community-based organizations. AIDS Educ Prev. 2005;17(4):284–99.
8. Centers for Disease Control and Prevention. Evaluating CDC-funded health department HIV prevention programs. Atlanta, GA: Centers for Disease Control and Prevention, 2001. Available at: http://www.cdc.gov/hiv/topics/evaluation/health_depts/guidance/.
9. Centers for Disease Control and Prevention. HIV prevention strategic plan through 2005. Atlanta, GA: Centers for Disease Control and Prevention, 2001. Available at: http://www.cdc.gov/hiv/resources/reports/psp/pdf/prev-strat-plan.pdf.
10. Rugg D, Buehler J, Renuad M. Evaluating HIV prevention: a framework for national, state, and local levels. Am J Evaluation. 1999;20(1):35–56.
11. U.S. Congress, Office of Technology Assessment. The effectiveness of AIDS prevention efforts (OTA-BP-H-172). Washington, DC: U.S. Congress, Office of Technology Assessment, 1995. Available at: http://www.fas.org/ota/reports/9556.pdf.
12. Argyris C, Schön DA. Organizational learning: a theory of action perspective. Reading, MA: Addison-Wesley, 1978.
13. Atkisson CC, Hargreaves A, Schulberg HC, Sheldon A, Baker F. A conceptual model for program evaluation in health organizations. In: Program evaluation in the health fields, Vol. 2. New York, NY: Human Sciences Press, 1979.
14. Birelson P. Turning child and adolescent mental-health services into learning organizations. Clin Child Psychol Psychiatry. 1999;4(2):265–74.
15. Brazil K. A framework for developing evaluation capacity in health care settings. Int J Health Care Qual Assur Inc Leadersh Health Serv. 1999;12(1):vi–xi.
16. Gentry D, Gilliam A, Scott K. Evaluation of the national and regional minority organizations (NRMO) initiative (Technical Report Vol. 1). St. Louis, MO: Saint Louis University School of Public Health, 1999.
17. Love AJ. Developing effective internal evaluation. San Francisco, CA: Jossey-Bass, 1984.
18. Patton MQ. Utilization-focused evaluation: the new century text. Thousand Oaks, CA: Sage Publications, 1997.
19. Preskill H, Torres RT. The learning dimension of evaluation use. New Directions for Evaluation. 2000;88:25–37.
20. Stockdill SH, Baizerman M, Compton DW. Toward a definition of the ECB process: a conversation with the ECB literature. New Directions for Evaluation. 2002;93:1–25.
21. Centers for Disease Control and Prevention. Framework for program evaluation in public health. MMWR Recomm Rep. 1999;48(RR-11):1–40.
22. Cheadle A, Sullivan M, Krieger J. Using a participatory approach to provide assistance to community-based organizations: the Seattle Partners Community Research Center. Health Educ Behav. 2002;29(3):383–94.
23. Chen HT. Designing and conducting participatory outcome evaluation of community-based organizations' HIV prevention programs. AIDS Educ Prev. 2002;14(3 Suppl A):18–26.
24. Napp D, Gibbs D, Jolly D. Evaluation barriers and facilitators among community-based HIV prevention programs. AIDS Educ Prev. 2002;14(3 Suppl A):38–48.
25. Trevisan MS. Evaluability assessment from 1986 to 2006. Am J Evaluation. 2007;28(3):290–303.
26. Thurston WE, Graham J, Hatfield J. Evaluability assessment: a catalyst for program change and improvement. Eval Health Prof. 2003;26(2):206–21.
27. Dwyer JJ, Hansen B, Barrera M. Maximizing children's physical activity: an evaluability assessment to plan a community-based, multi-strategy approach in an ethno-racially and socio-economically diverse city. Health Promot Int. 2003;18(3):199–208.
28. Mayberry RM, Daniels P, Akintobi TH. Community-based organizations' capacity to plan, implement and evaluate success. J Community Health. 2008;33(5):285–92.
29. Monroe MC, Fleming ML, Bowman RA. Evaluators as educators: articulating program theory and building evaluation capacity. New Directions for Evaluation. 2005;108:57–71.
30. McLaughlin JA, Jordan GB. Logic models: a tool for telling your program's performance story. Evaluation and Program Planning. 1999;22(1):65–72.
31. Patton MQ. Qualitative research and evaluation methods. 3rd ed. Thousand Oaks, CA: Sage Publications, 2002.
32. W.K. Kellogg Foundation. Using logic models to bring together planning, evaluation, and action: logic model development guide. Battle Creek, MI: W.K. Kellogg Foundation, 2004. Available at: http://www.ncga.state.nc.us/PED/Resources/documents/LogicModelGuide.pdf.
33. Miller TI, Kobayashi MM, Noble PM. Insourcing, not capacity-building, a better model for sustained program evaluation. Am J Evaluation. 2006;27(1):83–94.
34. Mayberry RM, Daniels P, Yancey E. Enhancing community-based organizations' capacity for HIV/AIDS education and prevention. Eval Program Plann. 2009;32(3):213–20.
35. Julian DA. The utilization of the logic model as a system level planning and evaluation device. Eval Program Plann. 1997;20(3):251–57.
36. Preskill H, Boyle S. A multidisciplinary model of evaluation capacity-building. Am J Evaluation. 2008;29(4):443–59.
37. Arnold ME. Developing evaluation capacity in extension 4-H field faculty: a framework for success. Am J Evaluation. 2006;27(2):257–69.
38. Akintobi TH, Yancey EM, Berry J. Cross-site program evaluation final report. Atlanta, GA: Morehouse School of Medicine Prevention Research Center/The Pfizer Foundation Southern HIV/AIDS Prevention Initiative, 2007. Available at: http://www.pfizer.com/files/philanthropy/MSM_PRC-Pfizer_SHAPI_Final_Report.pdf.

Figure 1. Evaluability assessment and evaluation capacity-building steps.

CBO = Community based organizations

MSM-PRC = Morehouse School of Medicine—Prevention Research Center

TABLE 1. CBO EVALUABILITY ASSESSMENT RESULTS

Evaluability challenge: Development of Measurable Change Objectives
Opportunity for capacity-building: While there was clear articulation of outputs, or the products of each strategy or activity (i.e., the number of peer educators trained, condoms distributed), very few Grantees identified measurable objectives to gauge short- or mid-term change resulting from these efforts. Outcome-based evaluation requires systematic collection of data that will capture desired change, beyond the documenting of numbers of activities and related products.

Evaluability challenge: Selection/Revision of Methods and Tools to Measure Progress and Desired Outcomes
Opportunity for capacity-building: Several Pfizer Grantees expressed the desire to revise or amend existing data collection tools to assess not only HIV knowledge but also risk perceptions or behavioral intentions among prioritized target populations. Further, few had used both qualitative and quantitative data to measure processes and outcomes.

Evaluability challenge: Data Collection, Storage and Analysis
Opportunity for capacity-building: Limited evaluation accountability in previously funded programs and limited time and staff to spend on systematic data collection or entry were among the reasons cited for having little or no systematic data management systems. Further, most did not have an electronic database for survey data collected. These factors limited the degree to which data quality checks and subsequent data analysis could be conducted to assess interventions.

TABLE 2. CROSS-SITE PROGRAM ASSESSMENT SURVEY (C-PAS) SAMPLE QUESTIONS

Question: Please rate your organization's ability to:
  Develop data collection tools
  Conduct focus groups
  Enter collected data into the computer
  Analyze collected data
Answer choices: On a scale of 1 (Low) to 5 (High)

Question: Please indicate your current level of skill (knowledge or ability, respectively) related to each of the following steps in community program development:
  Development of goals and objectives
  Data Management
  Qualitative Methods
Answer choices: 1 = None, 2 = Little, 3 = Some, 4 = A lot, 5 = Extensive

Question: Please indicate all technical assistance needs:
  Logic model development to help chart a clearly visible path for program planning
  Data Management
  Data Collection Tool Development
  Protocol Development to increase consistent data collection, management and analysis
  Qualitative/Quantitative methods to learn the best ways to measure expected changes
Answer choices: Yes or No

C-PAS = Cross-site program assessment survey

TABLE 3. CBO EVALUATION CAPACITY-BUILDING AND MEASUREMENT SCHEDULE, 2004–2006

Activity: Pfizer Foundation Southern HIV/AIDS Prevention Initiative Orientation to Evaluation, June 2, 2004
Description: An introduction to evaluation philosophy and to methods and procedures of community intervention planning, implementation, and evaluation.

Activity: 1st cross-site program assessment survey (C-PAS) conducted, February–March 2005
Description: Baseline assessment.

Activity: Cross-site Evaluation Teleconferences and Site Visits, Fall 2004
Description: A teleconference was conducted with each grantee to gain better insight into the background of the programs, the current status of intervention development, and future plans. Subsequent site visits to each organization were also conducted by a member of the evaluation team and a member from Pfizer to gain additional insight into organizations' environments and their HIV/AIDS and other programs.

Activity: Training Workshop, May 11–12, 2005
Description: Conducted in response to evaluation challenges and capacity-building needs identified during project year 2004, including data collection methods, tools, and analysis; database development; and logic model review.

Activity: Training Teleconferences, April–August 2005
Description: A total of 12 teleconferences were designed and conducted to facilitate evaluation capacity-building opportunities through (a) reinforcement of intervention skills attained, (b) provision of an outlet for evaluation resource-sharing, and (c) discussion of real-time evaluation case studies that may be applied to individual program activities.

Activity: 2nd C-PAS conducted, February–March 2006
Description: Follow-up assessment 1.

Activity: Training Conference, June 14–16, 2006
Description: Provided in-depth training in areas identified through evaluation of C-PAS findings, capacity-building activities, and feedback from grantees, including developing survey questions and focus group guides; qualitative and quantitative data entry, management, and analysis; and use of collected data.

Activity: Training Web Conferences, March and August 2006
Description: Two capacity-building training web conferences were developed and facilitated in Year 3, designed to prepare grantees for sustained programmatic and evaluation activities beyond the last year of the initiative. Each web conference was offered twice, on separate days, to allow for small group interaction among 9–11 grantees and to accommodate scheduling needs.

Activity: Program Assessment Teleconferences, August 9–10, 2006
Description: One-on-one, semi-structured teleconferences were conducted to gain better insight into grantees' technical assistance needs for completion of the 2006 Program Assessment and into continued evaluation challenges remaining at the conclusion of the initiative.

Activity: 3rd C-PAS conducted, September–October 2006
Description: Final assessment.

CBO = Community based organizations

C-PAS = Cross-site program assessment survey

TABLE 4. CROSS-SITE PROGRAM ASSESSMENT SURVEY (C-PAS) SKILL, KNOWLEDGE AND ABILITY SUMMARY SCORES

C-PAS administration | Skill,(a) median (IQR) | Knowledge,(a) median (IQR) | Ability,(b) mean (SD)
1st (n=39) | 2.83 (0.67) | 2.83 (1.33) | 3.88 ± 0.42
2nd (n=36) | 3.00 (0.75) | 3.00 (0.66) | 4.07 ± 0.45
3rd (n=41) | 3.16 (0.83)(c) | 3.17 (0.34)(d) | 4.21 ± 0.46(e)

(a) Skills and knowledge based on a five-point Likert scale of 0 (none) to 4 (extensive); summary score of six items.

(b) Abilities based on a five-point Likert scale of 1 (low) to 5 (high); summary score of 20 items.

(c) Overall trend in higher summary skills scores is not statistically significant, p=.057.

(d) Overall trend in higher summary knowledge scores is statistically significant, p=.022.

(e) Overall trend in higher summary abilities scores is statistically significant, p=.0004.

IQR = Interquartile Range

SD = Standard Deviation