RSNA 2008 

Abstract Archives of the RSNA, 2008


LL-IN2088-L08

An Online Peer Reviewing System for Evaluating the Accuracy of Interpretation of Radiology Reports: Data Analysis for the Initial 6 Months

Scientific Posters

Presented on December 3, 2008
Presented as part of LL-IN-L: Informatics

 Research and Education Foundation Support

Participants

Chun-Shan Yam PhD, Presenter: Nothing to Disclose
Jonathan B. Kruskal MD, PhD, Abstract Co-Author: Nothing to Disclose
Katherine Margaret Krajewski MD, Abstract Co-Author: Nothing to Disclose
Kei Yamada MD, Abstract Co-Author: Nothing to Disclose

CONCLUSION

We followed the guidelines proposed by the ABR and developed an online system for the Accuracy of Interpretation PQI project. Results from the first 6 months show statistically significant agreement with national benchmark statistics.

BACKGROUND

In the article "The American Board of Radiology Perspective on Maintenance of Certification: Part IV-Practice Quality Improvement in Diagnostic Radiology" [Strife et al., AJR 188, 2007], the ABR encourages each board-certified diagnostic radiologist to understand his or her professional responsibilities and to participate in continuous quality improvement. This initiative, known as "Practice Quality Improvement" (PQI), provides evidence of critical evaluation of the individual's performance in practice. One of the PQI projects proposed by the ABR is "Accuracy of Interpretation" (AI). In our institution, a Quality Assurance (QA) team was charged with designing and implementing these PQI projects.

EVALUATION

The AI project was designed and implemented jointly by the radiology QA team and the hospital information system group. The web-based user interface allows radiologists, after reading a prior examination, to submit a peer-review rating simply by entering the accession number. Ratings use the 4-point RadPeer scale: [1] concur with interpretation or find only minor differences; [2] disagree: difficult diagnosis, not ordinarily expected to be made; [3] disagree: diagnosis should be made most of the time; [4] disagree: diagnosis should be made almost every time. Having completed the first 6 months of operation, we are now using the collected submission data, user feedback, and system information to review and improve the AI project.
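
For illustration only, the sketch below shows how a submission interface of this kind could be implemented as a small web service: the reviewer posts an accession number, a reviewer identifier, and a 1-4 RadPeer score, and the service validates and stores the rating. This is a minimal sketch in Python/Flask; all names (routes, table layout, database file) are hypothetical and do not describe the authors' actual system, which was integrated with the hospital information system.

    # Minimal, hypothetical sketch of a RadPeer submission endpoint.
    # Not the authors' implementation; names and storage are illustrative.
    from flask import Flask, request, jsonify
    import sqlite3

    app = Flask(__name__)
    VALID_SCORES = {1, 2, 3, 4}  # 4-point RadPeer scale

    def get_db():
        conn = sqlite3.connect("peer_review.db")  # hypothetical database file
        conn.execute(
            """CREATE TABLE IF NOT EXISTS submissions (
                   accession_number TEXT NOT NULL,
                   reviewer_id      TEXT NOT NULL,
                   radpeer_score    INTEGER NOT NULL
                       CHECK (radpeer_score BETWEEN 1 AND 4),
                   submitted_at     TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"""
        )
        return conn

    @app.route("/peer-review", methods=["POST"])
    def submit_review():
        data = request.get_json()
        acc = data.get("accession_number")
        reviewer = data.get("reviewer_id")
        score = data.get("radpeer_score")
        # Reject submissions missing required fields or with an invalid score
        if not acc or not reviewer or score not in VALID_SCORES:
            return jsonify(error="accession_number, reviewer_id, and a 1-4 "
                                 "radpeer_score are required"), 400
        with get_db() as conn:  # context manager commits on success
            conn.execute(
                "INSERT INTO submissions "
                "(accession_number, reviewer_id, radpeer_score) VALUES (?, ?, ?)",
                (acc, reviewer, score),
            )
        return jsonify(status="recorded"), 201

    if __name__ == "__main__":
        app.run()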

DISCUSSION

During the first 6 months of implementation, 9930 submissions were entered into the system by 59 attending radiologists from 7 specialty sections. The distribution of ratings ([1] 95.22%, [2] 3.07%, [3] 1.36%, [4] 0.35%) was benchmarked against national data, with statistically significant agreement.
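
For illustration, a benchmark comparison of this kind can be framed as a chi-square goodness-of-fit test of the observed rating counts against national rating proportions. The sketch below is a minimal example in Python using scipy; the national proportions shown are hypothetical placeholders, not the actual benchmark figures used in the study.

    # Chi-square goodness-of-fit sketch: observed 6-month rating counts
    # versus a national benchmark distribution (placeholder values).
    from scipy.stats import chisquare

    n = 9930  # total submissions in the first 6 months
    observed = [round(n * p) for p in (0.9522, 0.0307, 0.0136, 0.0035)]
    national_props = (0.952, 0.031, 0.013, 0.004)  # hypothetical benchmark
    expected = [n * p for p in national_props]

    # scipy requires the observed and expected totals to match exactly,
    # so rescale the expected counts to the observed total
    expected = [e * sum(observed) / sum(expected) for e in expected]

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")
    # A large p-value indicates no detectable departure from the benchmark,
    # i.e., the local rating distribution is consistent with national data.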

Cite This Abstract

Yam C, Kruskal JB, Krajewski KM, Yamada K. An Online Peer Reviewing System for Evaluating the Accuracy of Interpretation of Radiology Reports: Data Analysis for the Initial 6 Months. Radiological Society of North America 2008 Scientific Assembly and Annual Meeting, November 30 - December 5, 2008, Chicago IL. http://archive.rsna.org/2008/6021632.html