Participants
William Lotter, PhD, Cambridge, MA (Presenter) Officer, DeepHealth, Inc
Giorgia Grisot, PhD, Boulogne-Billancourt, France (Abstract Co-Author) Employee, DeepHealth, Inc
A. Gregory Sorensen, MD, Belmont, MA (Abstract Co-Author) Employee, DeepHealth, Inc; Board member, IMRIS Inc; Board member, Siemens AG; Board member, Fusion Healthcare Staffing; Board member, DFB Healthcare Acquisitions, Inc; Board member, inviCRO, LLC
lotter@deep.health
PURPOSE
Synthetic 2D images constructed from digital breast tomosynthesis (DBT) data are a useful tool for summarizing information across the DBT planes. However, the default synthetic 2D images included in many systems may not capture the information most important for cancer detection. We used machine learning (ML) to construct optimized 2D images from DBT slices and subsequently used these images for additional ML training and testing.
METHOD AND MATERIALS
The data consisted of three sets: Set A - 12,000 2D full-field digital mammography (FFDM) cases (4,500 proven cancers) from two sites; Set B - 22,000 DBT cases (300 proven cancers) with default synthetic 2D images from a site in Rhode Island; Set C - 1,000 screening DBT cases (100 proven cancers) with default synthetic 2D images from a site in Oregon. First, a 2D deep learning model was trained on Set A and then used to create ROI-optimized 2D images from the DBT cases in Set B using a proprietary method. The 2D model was then fine-tuned on these novel synthetic images in Set B, and was separately fine-tuned on the default synthetic 2D images in the same set. Finally, we tested the resulting models on Set C, where the ROI-optimized 2D images were created in the same manner as in Set B. Importantly, neither the model used to create the optimized images nor the model trained to classify these images was trained on data from Set C. Performance on the task of determining the presence or absence of proven cancer was quantified using the area under the ROC curve (AUROC).
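As a concrete illustration of the fine-tuning step only, the minimal sketch below assumes a PyTorch ResNet-50 backbone adapted to single-channel mammographic images with one malignancy logit per image. The backbone, optimizer settings, and dummy tensors are illustrative assumptions; they do not reflect the study's actual architecture, training configuration, or its proprietary image-synthesis method.

```python
# Hypothetical sketch: fine-tuning a pretrained 2D classifier on synthetic 2D images.
# All choices below (ResNet-50, Adam, learning rate, image size) are assumptions
# for illustration, not the study's configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)  # stand-in for the 2D model pretrained on FFDM (Set A)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel input
model.fc = nn.Linear(model.fc.in_features, 1)  # single malignancy logit per image

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small LR typical for fine-tuning
loss_fn = nn.BCEWithLogitsLoss()

def fine_tune_step(images, labels):
    """One gradient step on a batch of synthetic 2D images (B, 1, H, W) with 0/1 cancer labels."""
    model.train()
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy tensors stand in for synthetic 2D images and proven-cancer labels.
loss = fine_tune_step(torch.randn(4, 1, 512, 512), torch.tensor([0, 1, 0, 1]))
```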
RESULTS
The model trained and tested on the ROI-optimized 2D images obtained a significantly higher AUROC than the model trained and tested on the default synthetic 2D images (0.93 vs. 0.90, p<0.05). Combining the predictions of the two models improved performance further (AUROC 0.95, p<0.05), corresponding to a sensitivity of 91% at 89% specificity.
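The abstract does not state how the two models' predictions were combined; the sketch below uses a simple per-case score average as one hypothetical ensembling rule and shows how AUROC and sensitivity at a fixed specificity can be computed with scikit-learn. The simulated scores and labels are placeholders, not study data.

```python
# Illustrative evaluation/ensembling sketch; the averaging rule and simulated
# scores are assumptions, not the study's method or results.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)                     # 1 = proven cancer, 0 = negative
scores_default = labels + rng.normal(0, 0.9, size=1000)    # stand-in per-case model outputs
scores_optimized = labels + rng.normal(0, 0.8, size=1000)
scores_combined = (scores_default + scores_optimized) / 2  # simple average ensemble (assumed)

for name, s in [("default", scores_default), ("optimized", scores_optimized), ("combined", scores_combined)]:
    print(f"{name:9s} AUROC: {roc_auc_score(labels, s):.3f}")

# Sensitivity at a fixed specificity (e.g., 89%), read off the ROC curve.
fpr, tpr, _ = roc_curve(labels, scores_combined)
target_spec = 0.89
idx = np.searchsorted(fpr, 1 - target_spec, side="right") - 1  # last point with FPR <= 0.11
print(f"sensitivity at {target_spec:.0%} specificity: {tpr[idx]:.3f}")
```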
CONCLUSION
Using deep learning to create an ROI-optimized 2D image from DBT data can yield higher performance in AI-based classification than the default synthetic 2D image. The two image types are complementary: combining predictions on both versions leads to even higher performance.
CLINICAL RELEVANCE/APPLICATION
AI has the potential to improve screening mammography performance, especially for DBT. These results suggest that creating optimized 2D images from DBT using ML can be an effective strategy for ML-based classification. Future work will investigate whether these optimized synthetic images are also useful for human readers.