RSNA 2016

Abstract Archives of the RSNA, 2016


QS117-ED-TUA2

Improvement in Clinical Decision Support Coverage and Reduction in "Unscored" Examinations through a PDSA Project

Tuesday, Nov. 29 12:15PM - 12:45PM Room: QS Community, Learning Center Station #2



Maitraya K. Patel, MD, Sylmar, CA (Presenter) Nothing to Disclose
John F. Brunner, MD, Los Angeles, CA (Abstract Co-Author) Nothing to Disclose
Orest B. Boyko, MD, PhD, Los Angeles, CA (Abstract Co-Author) Speakers Bureau, Bracco Group; Speakers Bureau, Guerbet SA
Jennifer Sayles, MD, Los Angeles, CA (Abstract Co-Author) Nothing to Disclose
PURPOSE

A clinical decision support (CDS) platform that includes the American College of Radiology (ACR) Appropriateness Criteria was implemented at our institution for RF, CT, US, MR, NM, and MG examinations. CDS requires providers to answer 1-2 questions when ordering imaging studies and then assigns a score of 1-9: usually not appropriate (Low, 1-3), may be appropriate (Medium, 4-6), or usually appropriate (High, 7-9). During this implementation, “hard stops” were not included in the decision support process. As a result, many orders triggered decision support yet remained “unscored.” A PDSA (Plan-Do-Study-Act) project was initiated to analyze these “unscored” orders and to implement solutions that reduce the number of unscored studies, thereby improving CDS “coverage.”
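
As an illustration of the scoring bands described above, a minimal sketch in Python (the function name and the code itself are ours and not part of the CDS platform; only the band thresholds and labels come from the abstract):

def appropriateness_category(score: int) -> str:
    """Map a 1-9 CDS appropriateness score to the band used in this project (illustrative only)."""
    if not 1 <= score <= 9:
        raise ValueError("CDS appropriateness scores range from 1 to 9")
    if score <= 3:
        return "Low (usually not appropriate)"
    if score <= 6:
        return "Medium (may be appropriate)"
    return "High (usually appropriate)"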

METHODS

Exams triggering CDS were scored as High (7-9), Medium (4-6), or Low (1-3). “Recommendation provided” was the category used when the software provided recommendations but the provider chose a procedure that was not scored. “Unscored” examinations were those for which no CDS could be provided. “Coverage” was defined as (High + Medium + Low + Recommendations provided + Cancelled exams) divided by total orders, as sketched at the end of this section.

Unscored exams resulted from three causes: (1) “custom indications,” in which a provider entered an indication not included in or selected from the CDS database; (2) “logic gaps,” clinical indications without rules designed in CDS, commonly because some ACR Appropriateness Criteria do not provide recommendations for every variation of a given decision-tree logic; and (3) “unanswered questions,” because some clinical indications make answering certain questions optional, and if these were left unanswered no score resulted.

Periodic analysis of CDS utilization was performed from 8/7/2015 to 3/31/2016 to improve coverage of the software and reduce unscored exams. Custom indications were targeted by (1) developing an instructional video and reference card for providers on proper utilization of the software; (2) identifying areas that would benefit from locally created consensus rules based on best practice or other available evidence-based guidelines; (3) implementing a user-interface change with a “drop-down” of the top indications chosen at our institution for the selected order, allowing the provider to simply “pick and click” instead of “type and search”; and (4) using provider incentives in locations where radiologist approval was required for an examination, particularly in the Emergency Department: if a provider-selected exam received a medium or high score, the exam was automatically approved and the technologist performed the exam. Logic gaps were targeted by creating decision support rules where ACR recommendations were not all-encompassing for a given indication. Unanswered questions could not be addressed because of current limitations of the software. Finally, “Recommendations provided” was targeted by identifying procedures in the order catalog that were not yet mapped in CDS but were equivalent to mapped exams, so that a score was provided when these equivalent procedures were ordered.
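
A minimal sketch of the coverage metric defined above, assuming per-category order counts are available from the CDS utilization reports (the function and variable names are illustrative and not part of the CDS software):

def cds_coverage(high: int, medium: int, low: int,
                 recommendations_provided: int, cancelled: int,
                 unscored: int) -> float:
    """Coverage = (High + Medium + Low + Recommendations provided + Cancelled) / Total orders."""
    scored = high + medium + low + recommendations_provided + cancelled
    total_orders = scored + unscored
    return scored / total_orders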

RESULTS

At baseline for the 90-day period prior to 8/7/2015 there were 25.7% “high” orders (n=7864), 3.8% “medium” orders (n=1174), 2.1% “low” orders (n=633), 0.2% canceled orders (n=51), 12% “recommendations provided” orders (n=3690), and 56.2% “unscored” orders (n=17208). Following the interventions, in the 90-day period prior to 3/31/2016, there were 52% “high” orders (n=47177), 9.6% “medium” orders (n=8724), 6.1% “low” orders (n=5580), 0.8% canceled orders (n=716), 9% “recommendations provided” orders (n=8453), and 22.2% “unscored” orders (n=20154). CDS coverage increased from 43.8% in 8/2015 to 77.5% in 3/2016. Order volumes increased during the observation period as additional facilities were brought onto the CDS platform.
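
As a check, the coverage figures above can be recomputed from the reported order counts (an illustrative calculation; small differences from the published percentages reflect rounding):

# Baseline 90-day period prior to 8/7/2015
baseline_scored = 7864 + 1174 + 633 + 51 + 3690   # high + medium + low + cancelled + recommendations provided
baseline_total = baseline_scored + 17208           # plus unscored orders
print(baseline_scored / baseline_total)            # ~0.438, i.e. 43.8% coverage

# 90-day period prior to 3/31/2016, after the interventions
post_scored = 47177 + 8724 + 5580 + 716 + 8453
post_total = post_scored + 20154
print(post_scored / post_total)                     # ~0.778; the abstract reports 77.5% from the rounded category percentages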

CONCLUSION

Through a PDSA process, we improved CDS coverage from 43.8% at baseline to 77.5% at the end of the study period by reducing unscored examinations from 56.2% at baseline to 22.2% at the end of the study period. Doing so increased the number of exams receiving clinical decision support, providing appropriateness feedback to ordering providers at the time of order selection.