RSNA 2013 

Abstract Archives of the RSNA, 2013


LL-INS-MO5A

Quality Assurance Scoring of Computed Radiography Images: Comparison of Gray Scale and Color Monitors during Image Processing

Scientific Informal (Poster) Presentations

Presented on December 2, 2013
Presented as part of LL-INS-MOA: Informatics - Monday Posters and Exhibits (12:15pm - 12:45pm)

Participants

Regina Shirley RT, Abstract Co-Author: Nothing to Disclose
Eric Allan Brandser MD, Presenter: Nothing to Disclose
David Agard PhD, Abstract Co-Author: Nothing to Disclose
Carly Smith RT, Abstract Co-Author: Nothing to Disclose
Marcia Flaherty RT, Abstract Co-Author: Nothing to Disclose

PURPOSE

Technologists usually perform quality assurance (QA) at the acquisition workstation, using lower resolution color monitors rather than the higher resolution gray scale monitors found on diagnostic workstations. We noticed that some computed radiographic (CR) images seemed adequate on the technologist workstation (TW) but not on a diagnostic workstation (DW). We wanted to test the effect of monitor type on image QA scoring by technologists.

METHOD AND MATERIALS

100 CR examinations performed at one institution were collected prospectively over a 5 day period. All images were acquired on a single system by two technologists not included in this study. Each case was reviewed twice by 3 radiology technologists: once on a gray scale Barco 3220D monitor (1536x2048) and once on a color HP LA2206x monitor (1920x1080). Both systems used an HP 6700 tower running Windows XP Pro with McKesson HRS-A version 11.6 software. The order of image viewing was randomized for each reviewer at each sitting, with a two week delay between viewings to minimize case recall. The following grading system was used: 1 = "should never pass", 2 = "passable/acceptable", and 3 = "no need for improvement/perfect". Factors reviewed were mottle, motion, density, and contrast. Positioning errors were not considered. 12 cases were then reviewed a second time on each system to assess intra-observer agreement. Scores were analyzed with a multifactor analysis of variance (ANOVA) accounting for the effects of monitor type, evaluator, and image; the interaction between monitor and evaluator was also included in the model. Absolute agreement was assessed on the test/retest cases.
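For readers who want to reproduce this kind of analysis, a minimal sketch of the described ANOVA model follows, written in Python with pandas and statsmodels. The column names and the synthetic 1-3 scores are illustrative assumptions; the study's actual data are not shown here.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Hypothetical long-format data: one row per (monitor, evaluator, image)
# rating, with synthetic 1-3 quality grades standing in for real scores.
rows = [
    {"monitor": m, "evaluator": e, "image": f"case{i:03d}",
     "score": int(rng.integers(1, 4))}
    for m in ("color_TW", "grayscale_DW")
    for e in ("tech1", "tech2", "tech3")
    for i in range(1, 101)
]
df = pd.DataFrame(rows)

# Fixed effects for monitor, evaluator, and image, plus the
# monitor x evaluator interaction, mirroring the model in the abstract.
model = ols(
    "score ~ C(monitor) + C(evaluator) + C(image) + C(monitor):C(evaluator)",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=2))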

RESULTS

The average quality score on the TW was significantly higher than on the DW system (F = 74.33, p = .012). There was no significant interaction between monitor type and evaluator (F = 1.73, p = .178); the monitor effect was consistent across the 3 reviewers. Intraclass agreement was significantly higher with the DW system.
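The abstract does not state exactly how the test/retest agreement statistic was computed; one common choice for absolute agreement is an intraclass correlation coefficient (ICC). The sketch below, using the pingouin package and synthetic scores for the 12 re-reviewed cases, is an assumed approach, not the study's actual analysis.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)

# Hypothetical test/retest scores for the 12 cases re-reviewed on one system.
cases = [f"case{i:02d}" for i in range(1, 13)]
df = pd.DataFrame({
    "case": cases * 2,
    "session": ["test"] * 12 + ["retest"] * 12,
    "score": rng.integers(1, 4, size=24),
})

# ICC2 ("Single random raters") is the variant that measures absolute
# agreement between the two viewing sessions.
icc = pg.intraclass_corr(data=df, targets="case", raters="session",
                         ratings="score")
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])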

CONCLUSION

There is a statistically significant difference in the QA scores technologists give CR images when viewing on a color monitor versus a gray scale diagnostic monitor. Precision was higher with the gray scale DW system.

CLINICAL RELEVANCE/APPLICATION

The addition of a gray scale monitor may improve the precision and accuracy of technologist assessment of image quality prior to submission for radiologist interpretation.

Cite This Abstract

Shirley, R, Brandser, E, Agard, D, Smith, C, Flaherty, M. Quality Assurance Scoring of Computed Radiography Images: Comparison of Gray Scale and Color Monitors during Image Processing. Radiological Society of North America 2013 Scientific Assembly and Annual Meeting, December 1 - December 6, 2013, Chicago IL. http://archive.rsna.org/2013/13044220.html