Participants
Zhiyang Fu, MEng, Tucson, AZ (Abstract Co-Author) Nothing to Disclose
Hsin Wu Tseng, PHD, Tucson, AZ (Abstract Co-Author) Nothing to Disclose
Srinivasan Vedantham, PhD, Tucson, AZ (Presenter) Research collaboration, Koning Corporation; Research collaboration, General Electric Company
Andrew Karellas, PhD, Tucson, AZ (Abstract Co-Author) Nothing to Disclose
Ali Bilgin, Tucson, AZ (Abstract Co-Author) Nothing to Disclose
svedantham@radiology.arizona.edu
PURPOSE
To objectively quantify and demonstrate the feasibility of deep learning-driven reconstruction for sparse-view dedicated breast CT (BCT) to reduce radiation dose, and to identify the best method for a reader study.
METHOD AND MATERIALS
Projection datasets (300 views, full scan; 12.6 mGy MGD) from 137 BI-RADS 4/5 women who underwent BCT prior to biopsy were reconstructed with the FDK algorithm (0.273 mm isotropic voxels) and served as the reference. Sparse-view projection data (100 views, full scan; 4.2 mGy median MGD) were reconstructed with the FDK algorithm (0.273 mm isotropic voxels), and three variants of a multiscale CNN (ResNet) architecture were trained with the sparse-view and reference FDK reconstructions as input and label, respectively: individual 2D slices ("ResNet2D"); 5 contiguous 2D slices ("ResNet2.5D"); and a residual dense network with 5 contiguous 2D slices ("ResDenseNet2.5D"). Each network used 2000/900/900 slices from 20/5/5 breasts for training/validation/testing. Once trained, 42,868 slices from the remaining 107 breasts were used to quantify the normalized mean-squared error (NMSE), bias, and absolute bias, all with respect to the reference, as well as the standard deviation of each reconstruction.
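The 2.5D input scheme described above (5 contiguous slices mapped to one target slice) can be sketched as follows; the array shapes and the function name are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def make_25d_inputs(volume, n_slices=5):
    """Stack n_slices contiguous slices as channels for each central slice.

    volume: (depth, height, width) sparse-view FDK reconstruction.
    Returns an array of shape (depth - n_slices + 1, n_slices, height, width),
    where the channel axis holds the neighborhood of each central slice.
    """
    half = n_slices // 2
    depth = volume.shape[0]
    # Each valid central slice i contributes slices [i - half, i + half].
    return np.stack(
        [volume[i - half:i + half + 1] for i in range(half, depth - half)],
        axis=0,
    )

# Toy example: a 10-slice "volume" of 8x8 pixels.
vol = np.random.rand(10, 8, 8).astype(np.float32)
x = make_25d_inputs(vol)  # shape (6, 5, 8, 8)
```

The boundary slices (the first and last `n_slices // 2`) have no full neighborhood and are simply dropped here; padding the volume at the edges would be the alternative.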
RESULTS
All 3 deep learning methods suppressed streak artifacts and showed significantly lower NMSE, bias, and absolute bias than FDK reconstruction (p<0.001). The NMSE (mean +/- SD, log scale) was lowest for ResDenseNet2.5D (-2.59 +/- 0.27; p<0.001). The bias was lowest for ResNet2.5D (-3.05E-5 +/- 3.05E-4; p<0.001). The absolute bias was lowest for ResDenseNet2.5D (9.05E-4 +/- 3.51E-4; p<0.001). The standard deviation of each deep learning sparse-view reconstruction was lower than that of the reference 300-view FDK reconstruction, as the CNN learns from the ensemble of breasts. The standard deviation was lowest for ResNet2.5D (3.67E-3 +/- 1.38E-3; p<0.001).
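As a sanity check on the reported metrics, the NMSE (on a log10 scale), signed bias, and absolute bias with respect to a reference reconstruction might be computed as below; the abstract does not state the exact normalization, so the mean-squared-reference denominator is an assumption.

```python
import numpy as np

def error_metrics(recon, reference):
    """Per-slice error metrics relative to a reference reconstruction."""
    diff = recon - reference
    nmse = np.mean(diff ** 2) / np.mean(reference ** 2)  # normalized MSE
    return {
        "log10_nmse": np.log10(nmse),        # reported on a log scale
        "bias": np.mean(diff),               # signed mean error
        "abs_bias": np.mean(np.abs(diff)),   # mean absolute error
        "sd": np.std(recon),                 # noise of the reconstruction
    }

# Toy check: a uniform +0.01 offset from a unit-valued reference slice.
ref = np.ones((64, 64))
rec = ref + 0.01
m = error_metrics(rec, ref)  # bias = abs_bias = 0.01, log10_nmse = -4
```

Aggregating these per-slice values over the test set (mean +/- SD across slices) would yield summaries in the form the results above report.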
CONCLUSION
Quantitatively, the ResNet architectures using multiple contiguous slices outperformed the architecture using individual slices. Deep learning-driven sparse-view reconstruction for radiation dose reduction is feasible and warrants further investigation.
CLINICAL RELEVANCE/APPLICATION
Deep learning-driven sparse-view reconstruction can potentially enable radiation dose reduction in breast CT to a level that may be suitable for breast cancer screening.