Developing a manually annotated clinical document corpus to identify phenotypic information for inflammatory bowel disease
File type: PDF, 783.26 KB



Details:

  • Alternative Title:
    BMC Bioinformatics
  • Description:
    Background

    Natural Language Processing (NLP) systems can be used for specific Information Extraction (IE) tasks, such as extracting phenotypic data from the electronic medical record (EMR). These data are useful for translational research and are often found only in free-text clinical notes. A key required step for IE is the manual annotation of clinical corpora and the creation of a reference standard (1) for training and validation tasks and (2) to focus and clarify NLP system requirements. These tasks are time-consuming, expensive, and require considerable effort from human reviewers.

    Methods

    Using a set of clinical documents from the VA EMR for a particular use case of interest, we identify specific challenges and present several opportunities for annotation tasks. We demonstrate specific methods using an open-source annotation tool, a customized annotation schema, and a corpus of clinical documents for patients known to have a diagnosis of Inflammatory Bowel Disease (IBD); a schematic sketch of such an annotation record appears after the details below. We report clinician annotator agreement at the document, concept, and concept-attribute level. We estimate concept yield in terms of annotated concepts within specific note sections and document types.

    Results

    Annotator agreement at the document level, for documents that contained concepts of interest for IBD, was very high: the estimated Kappa statistic was 0.87 (95% CI 0.82, 0.93). At the concept level, F-measure ranged from 0.61 to 0.83 (both statistics are sketched after the details below). However, agreement varied greatly at the level of specific concept attributes. For this particular use case (IBD), clinical documents producing the highest concept yield per document included GI clinic notes and primary care notes. Within the various types of notes, the highest concept yield was in sections representing patient assessment and history of presenting illness. Ancillary service documents and family history and plan note sections produced the lowest concept yield.

    Conclusion

    Challenges include defining and building appropriate annotation schemas, adequately training clinician annotators, and determining the appropriate level of information to annotate. Opportunities include narrowing the focus of information extraction to use-case-specific note types and sections, especially where NLP systems will be used to extract information from large repositories of electronic clinical notes (a pre-filtering sketch along these lines appears below).

  • PubMed ID:
    19761566
  • PubMed Central ID:
    PMC2745683
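To make the Methods concrete, the sketch below shows one way a customized annotation schema of the kind described could be modeled: each annotated span carries a normalized concept label plus attributes (negation status and source section are shown here), and concept yield can then be tallied per note type and section. The class names, attribute names, and labels are hypothetical illustrations, not the authors' actual schema.

from dataclasses import dataclass, field
from collections import Counter

# Hypothetical annotation model -- illustrative only, not the authors' schema.
@dataclass
class ConceptAnnotation:
    text: str               # the annotated span, e.g. "Crohn's disease"
    concept: str            # normalized concept label
    negated: bool = False   # example concept attribute: negation status
    section: str = "other"  # note section the span was found in

@dataclass
class ClinicalNote:
    note_type: str          # e.g. "GI clinic note"
    annotations: list = field(default_factory=list)

def concept_yield(notes):
    """Tally annotated concepts per (note type, section) pair."""
    counts = Counter()
    for note in notes:
        for ann in note.annotations:
            counts[(note.note_type, ann.section)] += 1
    return counts

note = ClinicalNote("GI clinic note")
note.annotations.append(
    ConceptAnnotation("Crohn's disease", "IBD_DIAGNOSIS", section="Assessment"))
note.annotations.append(
    ConceptAnnotation("no rectal bleeding", "GI_BLEEDING", negated=True, section="HPI"))
print(concept_yield([note]))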

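The two agreement statistics reported in the Results are standard ones: Cohen's Kappa corrects raw observed agreement between two annotators for the agreement expected by chance, and the F-measure is the harmonic mean of precision and recall over matched concept annotations. A minimal sketch of both computations (not the authors' code):

def cohens_kappa(rater_a, rater_b, labels=(0, 1)):
    """Cohen's kappa for two annotators' parallel label sequences."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)  # undefined if expected == 1

def f_measure(true_pos, false_pos, false_neg):
    """F1 over matched vs. unmatched annotations (assumes true_pos > 0)."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# Toy document-level labels: 1 = note contains a concept of interest for IBD.
print(round(cohens_kappa([1, 1, 0, 1, 0, 1, 1, 0], [1, 1, 0, 1, 1, 1, 0, 0]), 2))  # 0.47
print(f_measure(true_pos=50, false_pos=10, false_neg=15))  # 0.8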
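Finally, the Conclusion's suggestion of narrowing extraction to use-case-specific note types and sections amounts to a pre-filter in front of the NLP pipeline. The sketch below uses assumed names throughout: the document model, type names, and section names are stand-ins drawn from the Results, not a real pipeline interface.

# Illustrative high-yield targets, per the Results; not an exhaustive list.
HIGH_YIELD_TYPES = {"GI clinic note", "primary care note"}
HIGH_YIELD_SECTIONS = {"Assessment", "History of Presenting Illness"}

def select_for_extraction(corpus):
    """Yield only the (note id, section, text) units worth sending to NLP.

    `corpus` is assumed to be an iterable of dicts shaped like
    {"id": ..., "type": ..., "sections": {section_name: section_text}}.
    """
    for doc in corpus:
        if doc["type"] not in HIGH_YIELD_TYPES:
            continue  # skip low-yield document types entirely
        for name, text in doc["sections"].items():
            if name in HIGH_YIELD_SECTIONS:
                yield doc["id"], name, text

corpus = [{"id": 1, "type": "GI clinic note",
           "sections": {"Assessment": "Crohn's flare, started steroids.",
                        "Plan": "Follow up in 6 weeks."}}]
print(list(select_for_extraction(corpus)))
# [(1, 'Assessment', "Crohn's flare, started steroids.")]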