Methods for Linking Community Views to Measurable Outcomes in a Youth Violence Prevention Program

Catherine C. McDonald, Therese S. Richmond, Terry Guerra, Nicole A. Thomas, Alia Walker, Charles C. Branas, Thomas R. TenHave, Nicole A. Vaughn, Stephen S. Leff, Alice J. Hausman; Philadelphia Collaborative Violence Prevention Center

Progress in Community Health Partnerships: Research, Education, and Action. Winter 2012;6(4):499–506.

Background

All parties in community–academic partnerships have a vested interest in prevention program success. Markers of success that reflect the community's experience of programmatic prevention success are not always measurable, but they speak critically to community-defined needs.

Objective

The purpose of this manuscript was to (1) describe our systematic process for linking locally relevant community views (community-defined indicators) to measurable outcomes in the context of a youth violence prevention program and (2) discuss lessons learned, next steps, and recommendations for others trying to replicate a similar process.

Methods

A research team composed of both academic and community researchers conducted a systematic process of matching community-defined indicators of youth violence prevention programmatic success to standardized youth survey items being administered in the course of a program evaluation. The research team of three community partners and five academic partners considered 43 community-defined indicators and 208 items from the youth surveys used within the context of a community-based aggression prevention program. At the end of the matching process, 92 youth survey items were identified and agreed upon as potential matches to 11 of the community-defined indicators.

Conclusions

We applied rigorous action steps to match community-defined indicators to survey data collected in the youth violence prevention intervention. We learned important lessons that inform recommendations for others interested in such endeavors. The process used to derive and assess community-defined indicators of success emphasized the principles of community-based participatory research (CBPR) and use of existing and available data to reduce participant burden.

Keywords: community-based participatory research, health promotion, process issues, adolescent development, community health partnerships

Funding: National Institute of Nursing Research (NINR) T32 NR007100; National Institute of Nursing Research (NINR) F31 NR011107

Community involvement in all stages of program development, implementation, and evaluation is now a standard of public health practice. Essential to sustainable collaboration is the ability to demonstrate the "return on investment" to a wide variety of stakeholders.1–5 Community and academic parties have a vested interest in seeing programs succeed, though definitions of success may vary. The dilemma is that too often outcomes "speak" only to academic partners. Markers of success that reflect the community's experience of a prevention program are not always measurable with standardized instruments, raising questions of reliability, validity, and generalizability.6 The challenge is to create reliable and valid measures of program success that rigorously measure the impact of interventions on dimensions thought to be important to the local community. Through such measures, principles of knowledge sharing and co-learning fundamental to CBPR can be more fully integrated into program evaluations and evidence-based practice.4

Francisco and Butterfoss7 propose three main points to consider when evaluating community programs in a manner designed to communicate success to communities: (1) choice of datasets, (2) relevance of the data to the problems addressed, and (3) rigor of collection and presentation. Drawing from these key points, the long-term goal of this study was to develop measures of locally relevant, community-defined dimensions of program success for a youth violence prevention intervention. By "community defined," we mean that the indicators are based on dimensions and constructs specified by participants who live in the community, which may or may not coincide with the outcomes set for the intervention at the outset. Creating new measures that are reliable and valid is a longer-term effort. Thus, this manuscript focuses on the first phase of the process, in which we sought to link community-defined indicators to available data in our violence prevention intervention. The purpose of this manuscript was to describe this process and discuss lessons learned, next steps, and recommendations for others trying to replicate a similar process. The community and academic partners working together in this process are members of the Philadelphia Collaborative Violence Prevention Center (PCVPC) and collectively applied rigorous action steps.

Partnerships in the PCVPC

The PCVPC is a Centers for Disease Control and Prevention–funded Urban Partnership Academic Center of Excellence established in 2006 that is a collaboration of four academic institutions and a community research collaborative, the Philadelphia Area Research Community Coalition (PARCC).8 PARCC, organized in 2005, comprises about 20 community organizations conducting health-related programs in West/Southwest Philadelphia, representing many different stakeholders: grassroots, school-based, faith, academic, private nonprofit, and government. PARCC was organized out of the expressed interests of communities in the West/Southwest Philadelphia area to become partners with academic researchers in CBPR.9 PCVPC is built on principles of CBPR, with community representatives active in all aspects of center administration and research. From the inception of the response to the CDC's call for proposals for youth violence prevention centers, community and academic partners worked together to create a study design that targets questions of interest and needs of the community. At the core of PCVPC is a rigorously designed, randomized trial of a youth violence prevention intervention for youth ages 10 to 14 called PARTNERS, implemented in community settings in West and Southwest Philadelphia.10 The mission of PCVPC is to design, implement, and evaluate programs that enhance the resilience of communities affected by violence and to reduce the frequency and impact of youth violence, injury, and death in Philadelphia.11

METHODOLOGICAL APPROACH TO CREATING MEASURES OF COMMUNITY-DEFINED INDICATORS

The first phase of the process of creating measures of community-defined indicators of program success involved three steps of matching indicator constructs to available measures and existing data. Figure 1 summarizes the three steps as well as planned next steps in our process. Two underlying principles guided this effort: community partners were involved in all phases of the work, and existing and available data were used to address both ethical and practical concerns regarding research burden and access to information. Institutional review board approval was obtained from the sponsoring institution/university.

Step 1: Identifying Community-Defined Indicators

As identified in Figure 1, step 1 of identifying community-defined indicators of programmatic success of a youth violence prevention program involved focus groups and community engagement activities. During planning for the PARTNERS project, qualitative, participatory methods were employed to help the program developers "hear" and accommodate how the community casts and prioritizes the problem of youth violence.12 With recruitment efforts fostered by PARCC, four focus groups were held with community residents (n = 22), youth-serving agency representatives (n = 11), parents and caregivers (n = 3), and community leaders (n = 10). Results from the four focus groups and other community engagement activities revealed a total of 43 community-defined indicators reflecting community perceptions of violence prevention program success.13 Examples of the community-defined indicators included traditional outcomes such as reduced violence and neighborhood trash, but also included newer constructs such as "more adults intervening for youth," which was defined as the expectation that adults would reach out to youth in positive ways. The results are described in detail in Hausman and colleagues.13

Step 2: Matching Community-Defined Indicators With Publicly Available Data

Reported more fully in Hausman and colleagues,13 step 2 in Figure 1 involved matching the community-defined indicators with existing and publicly available data. This step involved an iterative process of review of data availability and accessibility, and feedback from members of our community advisory board. Matching efforts focused on large publicly available data sets, such as crime data, the Litter Index,14 and other regionally specific population surveys: 23 databases with a total of 47 datasets were reviewed. Publicly available data were found for 19 of the 43 community-defined indicators. For example, "Cleaner Streets, Cleaner Neighborhoods" was considered measurable by the Litter Index, a nationally standardized measure collected locally in Philadelphia.14 Review and feedback from our community advisory board indicated that only two of the identified sources of data were considered unreliable by community leaders, and these were therefore not further considered. For example, one dataset that we accessed was not suitable for our work because the process by which the community products it counted were produced reflected larger political forces rather than local, community-driven efforts.

Step 3: Matching Community-Defined Indicators to Data Collected in the Youth Violence Prevention Intervention

Step 3 of matching community-defined indicators to available data focused on data being collected for the preliminary evaluation of the PARTNERS intervention project.10 Step 3 in Figure 1 is the key focus of this manuscript and emphasized the use of data already collected or planned for collection. This strategy provided several key advantages. First, we did not add to participants' research burden, an underlying value consistently expressed by the members of PCVPC and the larger community. Respect for this concern focused the measurement-building process on data that were or would become available through the surveys already being implemented in the community for the PARTNERS evaluation. Second, the PARTNERS evaluation used a variety of established psychometric scales with known reliability and validity to evaluate specific constructs in youth development. The standardized instruments used by the PARTNERS team to evaluate the effectiveness of the intervention included measures of aggression, oppositional defiant disorder, social information processing, anger management, attitudes toward violence, youth assets, self-esteem, and leadership.15–26 These instruments provided a pool of individual items whose essential quality could be relied upon. Both academic and community partners from PARCC saw strength in using these data from the PARTNERS project, with the anticipated possibility that we could eventually see how the "new" community constructs would compare with those measured by the established scales. The matching process was conducted as "proof of concept" that elements of existing standardized psychometric tools could be used to measure community-defined constructs that reflect, but do not replicate, more traditional intervention outcomes. For purposes of describing the process, we focus herein on the results of the matching process involving the youth instruments.

An eight-member team composed of five academic and three community researchers from the PCVPC was formed to conduct a systematic item-by-item review of the evaluative standardized instruments administered in the youth violence prevention intervention. For clarity here, we will call these team members "raters." Academic partners included four faculty members and one doctoral student training with the PCVPC. The three community members were members of PARCC and PCVPC, and lived and/or worked in West/Southwest Philadelphia. They had backgrounds in business, grassroots community organizations, and community and economic development. The community members were nominated by PARCC to participate in this research because of their ability to represent the intervention community and their demonstrated interest in promoting the health of the communities in West/Southwest Philadelphia. These community members had been involved with the development and implementation of PCVPC's research endeavors from the outset.

The process of matching the indicators to the evaluation tools started with having one rater (an academic partner) review all of the items in the youth surveys used in the evaluation (n = 208). The first rater assigned each item to an indicator where it appeared to be relevant; if no match was found, the item was discarded from the matching process. The academic and community partners chose this as a first step to help expedite the process of review. Although several members of the research team had participated in the original analysis of the focus group data and the entire research team had discussed the community-defined indicators, the research team did not formulate standardized definitions of the community-defined indicators to use during the matching process. This opportunity for further interpretation had strengths and limitations.

During the initial step with the first rater, 98 youth survey items were matched to 11 (of the 43) community-defined indicators. In keeping with a process that aimed to be inclusive of different interpretations for matching, individual survey items could be matched to more than one community-defined indicator. The 11 community-defined indicators initially matched by the first rater were academic performance, future orientation, helping others, increased civility, decreased truancy, more participation in community organizations, less cursing, more parental involvement, showing kids love, more adults intervening for youth, and kids helping around the house.

Once items were grouped under their matched community-defined indicators by the initial rater, the seven other raters reviewed the initial matching and scored their agreement (yes/no) with each match. The matching by the initial and subsequent raters was recorded and examined for patterns of agreement. Results of the matching process for each item were discussed among the team, which provided an opportunity for clarification and for questions to be answered. We then reviewed the patterns of agreement across the team for each set of items matched to each indicator. After discussion and review of the empirical data from all raters, the research team decided that five of the remaining seven raters needed to agree on a match in order for an item to be retained for future analyses. This allowed a clear majority of the group to agree on a match. Additionally, this ensured that no item would be retained if all three community partners disagreed with a match. At the end of the process, 92 youth survey items were identified and agreed upon as potential matches to 11 community-defined indicators. For example, 14 items from the Alabama Parenting Questionnaire,17 1 item from the Hare Area-Specific Self-Esteem Scale,16 and 3 items from the Youth Asset Survey21 matched to the indicator "More Parental Involvement."
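The retention rule just described is essentially a small decision procedure, and a minimal sketch may make it concrete for others replicating the process. The sketch below is illustrative only: the item identifier, indicator name, and votes are hypothetical placeholders rather than PCVPC data, and only the decision logic (at least five of the seven subsequent raters agreeing) follows the process described above.

```python
# Minimal sketch (Python) of the retention rule described in the text.
# Item IDs, indicator names, and votes below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class MatchVotes:
    item_id: str                 # survey item identifier (hypothetical)
    indicator: str               # community-defined indicator it was matched to
    community_agree: list[bool]  # yes/no votes from the 3 community raters
    academic_agree: list[bool]   # yes/no votes from the 4 remaining academic raters

def retain(match: MatchVotes, threshold: int = 5) -> bool:
    """Keep a proposed item-indicator match only if a clear majority agrees."""
    votes = match.community_agree + match.academic_agree
    return sum(votes) >= threshold

# Hypothetical example: a parenting-survey item proposed as a match to the
# indicator "More Parental Involvement."
example = MatchVotes(
    item_id="PARENT_ITEM_04",
    indicator="More Parental Involvement",
    community_agree=[True, True, False],
    academic_agree=[True, True, True, False],
)
print(retain(example))  # True: 5 of the 7 raters agreed with the match
```

One property of the five-of-seven threshold is worth noting: because only four academic raters remain after the first pass, no item can be retained when all three community raters disagree, which is exactly the guarantee described above.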

In our matching results, it is important to note that disagreement with the matching of items to an indicator did not fall along academic/community lines. Only one retained match of an item to an indicator had two of the three community raters disagreeing with the remaining academic raters. For the rest of the items retained in the matching process, two or more community partners agreed with the academic partners. No items had all three community members disagreeing with the rest of the raters. We saw this as a strength in the process for communication and common views. One example in which an agreed-upon definition of an indicator construct would likely have yielded different results was observed in the matching process for "increased civility." For this indicator, two academic team members consistently disagreed with the rest of the raters on 43 items. The decision rule of five-of-seven agreement meant that the 43 items could not be rejected, but two important points emerged. First, no other indicator had 51 items to be reviewed for matching. Second, through discussion, we assessed that the two dissenters were clearly defining the construct in a different way than the rest of the team. Keeping true to the established process required keeping the results as is, but it became clear that this was one construct for which further work was needed.

DISCUSSION

The first phase described here in the process of creating measures of community-defined indicators of success places emphasis on community participation and existing available data. A strength is that this process demonstrated how academic researchers and community leaders can work collaboratively to create measures of locally meaningful outcomes that meet established standards of evidence without adding to the research burden of participants. Both community and academic researchers participated in all stages of planning and reviewing, and community researchers had decision-making power equal to that of the academic researchers.

LESSONS LEARNED

The process described demonstrates several key areas where evaluation research can further the goals of CBPR. First, the process demonstrated that academic and community researchers can be well-aligned in interpretation and decision making within the research process. The process by which community views were "matched" to available data can have implications beyond violence prevention interventions, with potential for application to health outcomes of individuals and communities across the lifespan. This observation encourages additional processes similar to the one described here, in which community partners function fully within the research process. Second, there was good congruence in the agreement patterns even without definitions of the community-defined indicators. The consistent pattern of agreement between academic and community raters suggests that the raters may have had the same interpretation of each community-defined indicator. Defining the indicators and creating formal measures is an iterative process, and we feel that the results presented here are merely one of those iterations. Last, not all indicators were matched with items, and not all items were matched with an indicator. Thus, we acknowledge that some new measurement tools might need to be developed to fully capture community-defined constructs.

Another important lesson in our study was that the process provided a way to put CBPR into action in a prescribed manner. Through the matching procedures, academic partners in the research team further learned the importance of evaluating items that reflect community voice. In turn, community leaders were exposed to the systematic research process, which will help them be more active consumers and advocates of research in the future. The community partners in the PCVPC from PARCC came to the "research table" with a structure, support, and experience that not all community partners may have. Involvement with community partners who are not as familiar with the research process requires more time for establishing trust, communication, research goals, and a commitment to the process. We recognize that many of the community and academic partners had experience in CBPR, and this added strength to the process.

Potential Limitations

We recognize the limitations of our process. First, the reliance on existing instruments used by the PARTNERS evaluation limited the available pool of items. Second, not all eight raters reviewed each item from all of the standardized instruments. Although the first rater purposefully erred on the side of inclusion rather than exclusion, there may be items that the first reviewer did not include for further review by the group that others would have included. Additionally, because an academic team member did the first pass, this might have introduced an academic bias into the process. There might also have been even more uniformity in agreement with proposed matches if standardized definitions of the community-defined indicators had been provided at the beginning, rather than leaving interpretation to the raters. However, there were patterns of high agreement among raters and results indicating acceptable internal consistency, even without standard definitions.

Important Next Steps

We propose important next steps in this process (steps 4 and 5 in Figure 1). We need to conduct statistical analyses to assess the validity and reliability of the new indices derived from the matched items from the different established scales. Statistical evaluation (validity and reliability) of the new indices will likely refine the number of items that match to indicators. We also need feedback from the larger community to strengthen the process and the validity of the work.
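The manuscript does not specify which reliability statistic will be used in step 4; as one hedged illustration of what that analysis might look like, the sketch below computes Cronbach's alpha, a common measure of internal consistency, for a hypothetical new index assembled from matched survey items. The response data are fabricated placeholders, not PARTNERS evaluation data.

```python
# Illustrative sketch only: Cronbach's alpha for a new index built from
# matched survey items. The data below are fabricated placeholders.

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Rows = respondents, columns = items in the candidate index."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # per-item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses (5 youth x 4 items) for an index such as
# "More Parental Involvement" assembled from matched items.
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(scores), 2))  # prints the alpha for this toy index
```

A low alpha for a candidate index would suggest dropping or regrouping items, which is consistent with the expectation above that statistical evaluation will likely refine the number of items matched to each indicator.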

Because we used tools that are being used to collect data in the context of the PARTNERS evaluation, we will have access to data with which to conduct these analyses. We are careful not to interpret the results of our matching as a reflection on the valid, standardized instruments and the constructs they were originally designed to measure. The instruments were originally developed to measure specific constructs in youth development and are in fact of interest to the PARTNERS intervention. We will have the opportunity to analyze the new item configurations and evaluate consistencies and differences in the data between the original scales in PARTNERS and the items that comprise the new indices of the community-identified indicators.

Results of the entire matching process (to public data [step 2] and to PARTNERS evaluation data [step 3]) yielded six community-defined indicators that were matched to both public and PARTNERS data (increased civility, future orientation, academic performance, helping others, decreased truancy, and more residents participating in community organizations). This presents an additional area of evaluation in future phases, where we can compare and contrast the community-defined indicators based on data from these different sources.

Recommendations for Others Looking to Replicate This or a Similar Process

Although the community-defined indicators of violence prevention programmatic success may not yet be considered universal and the availability of data will certainly vary by context, the process of reviewing and comparing community-defined indicators to academically defined outcomes and measures is informative and provides an opportunity to engage community members in the evaluation research process. The lessons learned from our experience encourage replication of the process in other communities and intervention program contexts and demonstrate how community voice can be woven into evaluation science in meaningful and important ways. Table 1 highlights recommendations based on our success and lessons learned.

CONCLUSION

The process described here capitalizes on collected data to meet the voiced needs of the community to have locally relevant indicators of program success available. The matching process linking community identified indicators with survey items from established standardized measures used in a violence prevention intervention integrated continuous community and academic feedback. Community members were involved in every step of the research process. This method not only avoided additional participant burden and conserved limited financial resources, but also sought to increase return on investment for the community by using available and accessible data. As such, we worked toward a mutual goal of meeting both academic and community needs for evaluative information. Emphasizing the use of existing and accessible data also increases community capacity to evaluate programs and address local information needs. This process should be broadened beyond youth violence prevention to other forms of interventions relevant to local communities. This innovation will improve the capacity of program evaluations to address community interests and help build support for sustainability.

This manuscript was supported by cooperative agreement number 5 U49 CE001093 (PI: Fein) from the Centers for Disease Control and Prevention. Its contents are the sole responsibility of the authors and do not represent the official position of the Centers for Disease Control and Prevention. This research was also supported by Award Number F31NR011107 (PI: McDonald) and Award Number T32NR007100 (PI: Sommers) from the National Institute of Nursing Research.

Our collaborator Dr. Thomas TenHave passed away May 2011 and will be greatly missed for his wisdom, kind patience, and mentorship. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Nursing Research or the National Institutes of Health. We would also like to thank Saburah Abdul-Kabir, Melanie Freedman, Kevin Giangrasso, Wanda Moore, Kate Radetich, Chris Reiger, Brenda Rochester, Maurice Stewart, Kim Wilson, and Crystal Wyatt for their contributions to the study.

REFERENCES

1. Jackson CA, Pitkin J, Kington R. Evidence-based decision making for performance monitoring. Santa Monica (CA): RAND; 1998.
2. Hausman AJ. Implications of evidence based practice in community health. Am J Commun Psychol. 2002;30:453–467.
3. Feinberg ME, Bonempo DE, Greenberg MT. Predictors and level of sustainability of community prevention coalitions. Am J Prev Med. 2008;34:495–501.
4. Wallerstein NB, Duran B. Using community-based participatory research to address health disparities. Health Promot Pract. 2006;7:312–323.
5. Green L, Daniel M, Nowick L. Partnerships and coalitions for community-based research. Public Health Rep. 2001;116:20–31.
6. Hausman AJ, Becker JA, Brawer R. Identifying value indicators and social capital in community health partnerships. J Commun Psychol. 2005;33:691–704.
7. Francisco VT, Butterfoss FD. How do we know if we are making a difference with our program or community initiative? Health Promot Pract. 2003;4:367–370.
8. Johnson JC, Hayden UT, Thomas N. Building community participatory research coalitions from the ground up: The Philadelphia Area Research Community Coalition. Prog Community Health Partnersh. 2009;3:61–72.
9. Community Partners. Philadelphia Area Research Community Coalition [cited 2012 Mar 25]. Available from: http://phillyviolenceprevention.org/community-partners/
10. Leff SS, Thomas DE, Vaughn NA. Using community-based participatory research to develop the PARTNERS youth violence prevention program. Prog Community Health Partnersh. 2010;4:207–217.
11. Philadelphia Collaborative Violence Prevention Center. Overview: Who we are [cited 2010 Feb 22]. Available from: http://stokes.chop.edu/programs/pcvpc/overview-who-we-are/
12. Hausman AJ, Becker J. Using participatory research to plan evaluation in violence prevention. J Public Health Manag Pract. 2000;13:31–340.
13. Hausman AJ, Hohl B, Hanlon A. Translating community-specified indicators of program success into measurable outcomes. J Public Health Manag Pract. 2009;15:e22–e30.
14. Litter Measurement and Technical Committee [updated 2006; cited 2010 Feb 6]. Available from: http://www.kab.org/site/PageServer?pagename=Litter_index_main
15. Achenbach TM. Manual for the Youth Self-Report and 1991 profile. Burlington (VT): University of Vermont, Department of Psychiatry; 1991.
16. Shoemaker AL. Construct validity of area specific self-esteem: The Hare Self-Esteem Scale. Educational and Psychological Measurement. 1980;40:495–501.
17. Essau C, Sasagawa S, Frick P. Psychometric properties of the Alabama Parenting Questionnaire. J Child Fam Stud. 2006;15:595–614.
18. Crick NR. The role of overt aggression, relational aggression, and prosocial behavior in the prediction of children's future social adjustment. Child Dev. 1996;67:2317–2327.
19. Eyberg S, Pincus D. Eyberg Child Behavior Inventory & Sutter-Eyberg Student Behavior Inventory—Revised. Odessa (FL): Psychological Assessment Resources; 1999.
20. Leff SS, Crick NR, Angelucci J. Social cognition in context: Validating a cartoon-based attributional measure for urban girls. Child Dev. 2006;77:1351–1358.
21. Oman RF, Vesely SK, McLeroy KR. Reliability and validity of the youth asset survey (YAS). J Adolesc Health. 2002;31:247–255.
22. MacEvoy JP, Thomas NA, Leff SS. The learning to lead measure. Philadelphia: The Children's Hospital of Philadelphia; 2009.
23. Leff SS, Cassano M, Macevoy JP, Costigan TE. Initial validation of a knowledge-based measure of social information processing and anger management. J Abnorm Child Psychol. 2010;38:1007–1020.
24. Bosworth K, Espelage D, DuBay T, Daytner G, Karageorge K. Preliminary evaluation of a multimedia violence prevention program for adolescents. Am J Health Behav. 2000;24:268–280.
25. Crick NR, Grotpeter JK. Relational aggression, gender, and social-psychological adjustment. Child Dev. 1995;66:710–722.
26. Pelham WE Jr, Milich R, Murphy DA, Murphy HA. Normative data on the IOWA Conners Teacher Rating Scale. J Clin Child Psychol. 1989;18:259–262.
Figure 1. Process of Matching Community-Identified Indicators to Data Sources

* The results of Steps 1 and 2 are described more fully in Hausman and colleagues.13

Table 1. Recommendations for Indicator Matching

Initial Steps in the Process of Indicator Matching

The research team should have academic and community partners with experience working together in program planning, implementation, or research projects

Identify definitions of indicators through methods that engage community members (e.g., focus groups) and verify them through community feedback

Attempt to use existing data (public or primary data collection already in place) that does not add to participant burden; verify any matches with community members

Develop a team of academic and community partners willing to engage in an exercise of communication, with room for agreement and disagreement, in matching data to indicators

Specific to the Rating Process

Provide definitions for community-identified indicators to the matching team; engage in a discussion about the definitions prior to matching process

Have a subsample of academic and community partners rate initial agreement

Have the remainder of the academic-community partner team rate their matching agreement

Develop a matching threshold (e.g., 5 out of 7 agree) that will not allow a data item to match to an indicator if all community partners disagree with the matching

Next Steps Once Indicators Are Matched

Close the feedback loop and bring data from the matching process back to the larger community

Consider assessing the reliability of any new survey item configuration, or otherwise acknowledge the deviation from any standardized scale.

Consider that new measurement tools might need to be developed to fully capture community-defined constructs.