RSNA 2016

Abstract Archives of the RSNA, 2016


SSG07-09

Development and Evaluation of Natural Language Processing Software to Produce a Summary of Inpatient Radiographic Findings Identified for Follow-Up

Tuesday, Nov. 29 11:50AM - 12:00PM Room: S402AB

Awards
Student Travel Stipend Award

Ian R. Whiteside, MD, Stony Brook, NY (Presenter) Nothing to Disclose
IV Ramakrishnan, PhD, Stony Brook, NY (Abstract Co-Author) Nothing to Disclose
Ritwik Banerjee, PhD, Stony Brook, NY (Abstract Co-Author) Nothing to Disclose
Vasudev Balasubramanian, Stony Brook, NY (Abstract Co-Author) Nothing to Disclose
Basava Raju Kanaparthi, Stony Brook, NY (Abstract Co-Author) Nothing to Disclose
Matthew A. Barish, MD, Stony Brook, NY (Abstract Co-Author) Nothing to Disclose
CONCLUSION

Manual and NLP searches for inpatient follow-up recommendations agree frequently enough to justify generating an automated radiology discharge summary to prevent delayed or failed follow-up. The NLP software accurately detects follow-up keywords in our test population of “long stay” inpatients. We are continuing to train, test, and enhance the software to produce summaries containing pathologic findings, relevant anatomy, and follow-up recommendations.

Background

Hospital inpatients often undergo multiple radiographic examinations during a single admission, with frequent recommendations for follow-up of incidental findings. The urgent demands of inpatient care may postpone follow-up during the stay and ultimately result in failed follow-up after discharge. Including follow-up recommendations in the discharge summary may mitigate this risk. We are evaluating a natural language processing (NLP) software tool that produces a “radiology discharge summary” to facilitate follow-up of incidental findings.

Evaluation

We obtained 503 radiographic reports from our radiology report repository by randomly selecting 43 patients with inpatient stays of at least 7 days. A physician manually annotated the reports, marking expressions indicating the necessity for follow-up along with the pertinent anatomy, details of the pathologic findings, and the follow-up recommendations. From these annotations a list of keywords was extracted: recommend, correlate, follow-up, consider, advise, suggest, beneficial, could perform, further evaluation, and can be obtained. We then trained the NLP software on the annotated reports, the keywords, and their permutations. A minimal sketch of the keyword-matching step is shown below.
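To make the keyword-matching step concrete, here is a minimal Python sketch. The keyword list is taken from this abstract; the regular-expression handling of permutations (hyphen/space variants, inflected endings) and the function name are illustrative assumptions, not the authors' implementation.

```python
import re

# Keyword list reported in the abstract; the permutation handling
# (hyphen/space variants, inflected endings) is an illustrative assumption.
KEYWORDS = [
    r"recommend\w*", r"correlat\w*", r"follow[- ]?up", r"consider\w*",
    r"advis\w*", r"suggest\w*", r"beneficial", r"could perform",
    r"further evaluation", r"can be obtained",
]
KEYWORD_RE = re.compile(r"\b(" + "|".join(KEYWORDS) + r")\b", re.IGNORECASE)

def find_followup_keywords(impression: str) -> list[str]:
    """Return all follow-up keyword matches found in a report impression."""
    return [m.group(0) for m in KEYWORD_RE.finditer(impression)]

if __name__ == "__main__":
    text = ("Impression: 8 mm right lower lobe nodule. "
            "Recommend follow-up CT in 6 months.")
    print(find_followup_keywords(text))  # ['Recommend', 'follow-up']
```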

Discussion

After consolidating 32 multi-study reports, 471 unique reports remained. Manual review found 106 instances of the keywords in 61 (13%) reports, yielding 79 (17%) unique recommendations; 71 (90%) of these recommendations were addressed during the inpatient stay or in the discharge summary. At this stage the NLP software identified 103 (97%) of the keyword instances in the report impressions; the remaining 3 were missed because they did not appear in the impression. NLP pattern recognition located the relevant anatomy in the keyword sentence 28 (53%) times, in the immediately preceding sentence 12 (23%) times, and in the sentence before that 9 (17%) times; 4 (8%) of the anatomy-keyword relationships were misclassified. A sketch of this lookback search appears below.
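The three-sentence lookback just described (keyword sentence, prior sentence, the sentence before that) can be sketched as follows. The anatomy vocabulary, the naive sentence splitter, and the function names are placeholders introduced for illustration; the abstract does not describe the authors' actual pattern recognizer.

```python
import re

# Placeholder anatomy vocabulary for illustration only; the abstract does
# not describe the actual anatomy lexicon.
ANATOMY = {"lung", "lobe", "liver", "kidney", "thyroid", "adrenal", "nodule"}

def split_sentences(text: str) -> list[str]:
    """Naive splitter on sentence-ending punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.?!])\s+", text) if s.strip()]

def locate_anatomy(sentences: list[str], keyword_idx: int) -> int | None:
    """Search the keyword sentence, then up to two preceding sentences,
    mirroring the reported hit pattern (53% / 23% / 17%)."""
    for offset in range(3):
        idx = keyword_idx - offset
        if idx < 0:
            break
        words = {w.strip(".,;:").lower() for w in sentences[idx].split()}
        if words & ANATOMY:
            return idx
    return None

if __name__ == "__main__":
    report = ("There is an 8 mm nodule in the right lower lobe. "
              "Recommend follow-up CT in 6 months.")
    sents = split_sentences(report)
    print(locate_anatomy(sents, keyword_idx=1))  # 0 -> anatomy in prior sentence
```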