CDC Stacks | CDC Public Access

A Quantitative Linguistic Analysis of National Institutes of Health R01 Application Critiques from Investigators at One Institution


Details:
  • Pubmed ID:
    25140529
  • Pubmed Central ID:
    PMC4280285
  • Funding:
    DP4 GM096822/DP/NCCDPHP CDC HHS/United States
    DP4 GM096822/GM/NIGMS NIH HHS/United States
    R01 GM088477/GM/NIGMS NIH HHS/United States
    R01 GM111002/GM/NIGMS NIH HHS/United States
    R25 GM083252/GM/NIGMS NIH HHS/United States
  • Description:
    Purpose

    Career advancement in academic medicine often hinges on the ability to garner research funds, and the National Institutes of Health’s (NIH’s) R01 award is the “gold standard” of an independent research program. Studies show inconsistencies in R01 reviewers’ scoring and in award outcomes for certain applicant groups. Consistent with the NIH recommendation to examine potential bias in R01 peer review, the authors performed a text analysis of R01 reviewers’ critiques.

    Method

    The authors collected 454 critiques (262 from 91 unfunded and 192 from 67 funded applications) from 67 of 76 (88%) R01 investigators at the University of Wisconsin-Madison whose initially unfunded applications were subsequently funded between December 2007 and May 2009. To analyze the critiques, the authors developed positive and negative grant application evaluation word categories and selected five existing categories relevant to grant review. They analyzed the results with linear mixed-effects models to test for differences due to applicant and application characteristics.
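    The category-based scoring the Method describes can be sketched as a simple word-count over each critique. This is an illustrative reconstruction only: the category names and word lists below are hypothetical placeholders, since the study's actual dictionaries are not given in the abstract.

    ```python
    # Hypothetical sketch of per-critique category word counting.
    # CATEGORIES and the sample critique text are illustrative, not
    # taken from the study's actual word lists or data.
    import re
    from collections import Counter

    CATEGORIES = {
        "positive": {"excellent", "outstanding", "strong", "innovative"},
        "negative": {"weak", "unclear", "limited", "concern"},
    }

    def category_counts(critique: str) -> dict:
        """Count how many words from each category appear in one critique."""
        tokens = re.findall(r"[a-z]+", critique.lower())
        counts = Counter(tokens)
        return {
            name: sum(counts[word] for word in words)
            for name, words in CATEGORIES.items()
        }

    critique = "The approach is innovative and strong, but the aims are unclear."
    print(category_counts(critique))  # {'positive': 2, 'negative': 1}
    ```

    Per-critique counts like these would then serve as the outcome variables in the mixed-effects models, with applicant and application characteristics as predictors.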

    Results

    Critiques of funded applications contained more positive descriptors and superlatives and fewer negative evaluation words than critiques of unfunded applications. Experienced investigators’ critiques contained more references to competence. Critiques showed differences due to applicant sex despite similar application scores or funding outcomes: more praise for applications from female investigators; greater reference to competence/ability for funded applications from female experienced investigators; and more negative evaluation words for applications from male investigators (Ps < .05).

    Conclusions

    Results suggest that text analysis is a promising tool for assessing consistency in R01 reviewers’ judgments and that gender stereotypes may operate in R01 review.