Awards
Student Travel Stipend Award
Manual and NLP searches for inpatient follow-up recommendations agree frequently enough to justify generating an automated radiology discharge summary to avoid delayed or failed follow-up. The NLP software accurately detects follow-up keywords in our test population of "long-stay" inpatients. We are continuing to train, test, and enhance the software to produce summaries containing pathologic findings, relevant anatomy, and recommendations.
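As a rough illustration of the structured output such a summary could carry, the sketch below defines a record holding the three elements named above (pathologic finding, relevant anatomy, recommendation) and renders a plain-text addendum. The RadiologySummaryEntry class, its field names, and build_summary are illustrative assumptions, not the tool's actual data model.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RadiologySummaryEntry:
    """One follow-up item for the radiology discharge summary (illustrative only)."""
    report_id: str       # accession or report identifier
    finding: str         # pathologic finding described in the report
    anatomy: str         # relevant anatomy associated with the finding
    recommendation: str  # recommended follow-up action


def build_summary(entries: List[RadiologySummaryEntry]) -> str:
    """Render collected follow-up items as a plain-text discharge addendum."""
    lines = ["Radiology discharge summary - outstanding follow-up items:"]
    for e in entries:
        lines.append(
            f"- {e.anatomy}: {e.finding}; recommendation: {e.recommendation} "
            f"(report {e.report_id})"
        )
    return "\n".join(lines)
```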
Background
Hospital inpatients undergo multiple radiographic examinations during a single admission, and these examinations frequently generate recommendations for follow-up of incidental findings. The urgent requirements of the patient's care may postpone follow-up during the inpatient stay and result in failure of follow-up altogether. Including follow-up recommendations in the discharge summary may mitigate this risk. We are evaluating a natural language processing (NLP) software tool to produce a "radiology discharge summary" that facilitates follow-up of incidental findings.
Evaluation
We obtained 503 radiographic reports from our radiology report repository by randomly selecting 43 patients with at least 7 inpatient days. A physician manually annotated the reports, searching for expressions indicating the need for follow-up along with the pertinent anatomy, details of the pathologic findings, and the follow-up recommendations. A list of keywords was extracted: recommend, correlate, follow-up, consider, advise, suggest, beneficial, could perform, further evaluation, can be obtained. We trained the NLP software using the annotated reports, keywords, and keyword permutations.
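A minimal sketch of the keyword-search step is shown below, assuming plain-text impression sections and simple case-insensitive matching. The keyword list is the one extracted above, but the regular-expression approach and the find_followup_keywords function are illustrative assumptions rather than the actual NLP software.

```python
import re
from typing import List, Tuple

# Keyword list extracted from the manual annotation (see text above).
FOLLOWUP_KEYWORDS = [
    "recommend", "correlate", "follow-up", "consider", "advise",
    "suggest", "beneficial", "could perform", "further evaluation",
    "can be obtained",
]

# One case-insensitive pattern per keyword; "follow-up" also matches
# "follow up" and "followup".
_PATTERNS = [
    re.compile(kw.replace("-", "[- ]?"), re.IGNORECASE) for kw in FOLLOWUP_KEYWORDS
]


def find_followup_keywords(impression: str) -> List[Tuple[str, int]]:
    """Return (keyword, character offset) pairs found in the impression text."""
    hits = []
    for kw, pattern in zip(FOLLOWUP_KEYWORDS, _PATTERNS):
        for match in pattern.finditer(impression):
            hits.append((kw, match.start()))
    return sorted(hits, key=lambda h: h[1])
```

For example, find_followup_keywords("Recommend follow-up CT in 3 months.") would return hits for "recommend" and "follow-up" with their character offsets.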
Discussion
There were 32 multi-study reports, leaving 471 unique reports. The keywords occurred 106 times in 61 (13%) reports, yielding 79 (17%) unique recommendations, of which 71 (90%) were addressed during the inpatient stay or in the discharge summary. At this stage the NLP software identified 103 (97%) of the keyword instances in the impression section of the report; the remaining 3 keywords were not found because they did not appear in the impression. NLP pattern recognition located the relevant anatomy in the keyword sentence 28 (53%) times, in the preceding sentence 12 (23%) times, and two sentences prior 9 (17%) times; 4 (8%) of the relationships were misclassified.
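The pattern described above, anatomy appearing in the keyword sentence, the sentence before it, or the one before that, could be approximated by searching a short backward window of sentences, as in this sketch. The naive sentence splitter, the ANATOMY_TERMS list, and the locate_anatomy function are illustrative assumptions, not the software's actual pattern recognition.

```python
import re
from typing import List, Optional, Tuple

# Hypothetical anatomy vocabulary; a real tool would use a much larger lexicon.
ANATOMY_TERMS = ["lung", "liver", "kidney", "thyroid", "adrenal", "pancreas"]


def split_sentences(text: str) -> List[str]:
    """Naive sentence split on periods followed by whitespace (illustrative only)."""
    return [s.strip() for s in re.split(r"\.\s+", text) if s.strip()]


def locate_anatomy(sentences: List[str], keyword_sentence_idx: int,
                   window: int = 2) -> Optional[Tuple[str, int]]:
    """Search the keyword sentence and up to `window` preceding sentences for anatomy.

    Returns (anatomy term, sentence offset), where offset 0 is the keyword
    sentence, 1 the sentence before it, and so on; None if nothing is found
    within the window.
    """
    for offset in range(window + 1):
        idx = keyword_sentence_idx - offset
        if idx < 0:
            break
        sentence = sentences[idx].lower()
        for term in ANATOMY_TERMS:
            if term in sentence:
                return term, offset
    return None
```

With window=2 this covers the three cases reported above (same sentence, one sentence prior, two sentences prior); hits outside that window would go unresolved or risk misclassification.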