RSNA 2020

Abstract Archives of the RSNA, 2020


SSRO04-03

Deep Learning-based Breast and Organs-at-risk Segmentation in CT with Uncertainty Quantification for Radiation Therapy after Breast-Conserving Surgery

Sunday, Nov. 29 2:00PM - 3:00PM Room: Channel 5



Participants
Jimin Lee, Seoul, Korea, Republic Of (Presenter) Nothing to Disclose
Hyungjoo Cho, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Sung-Joon Ye, PhD, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Doo Ho Choi, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Won Park, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Haeyoung Kim, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Won Kyung Cho, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Hee Jung Kim, PhD, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose

For information about this presentation, contact:

heejung0228.kim@samsung.com

PURPOSE

To investigate a novel Deep Learning-based breast and organs-at-risk (OARs) segmentation technique in CT with uncertainty quantification for radiation therapy after breast-conserving surgery.

METHOD AND MATERIALS

The dataset consisted of CT images and RT structure files (RS) of 400 patients who underwent radiation therapy after breast-conserving surgery. It was randomly divided into 320 patients' data for the training stage and the remaining 80 patients' data for the test stage. Each 3D segmentation map (used as a target) was acquired from contours of the breast and three OARs (left lung, right lung, and heart) in the RS, reviewed independently by four radiation oncologists.

To develop a Deep Learning-based auto-segmentation technique, a published 3D convolutional neural network architecture (SCNAS-Net), optimized by Scalable Neural Architecture Search (SCNAS), was implemented. In the training stage, the model was trained to classify each voxel of the CT volume into one of the four organs under 5-fold cross-validation (CV).

To evaluate the segmentation performance of the trained SCNAS-Net, Dice coefficients (DICE) between the model outputs and the targets were calculated during the test stage. In addition, test-time augmentation and deep ensembling were applied to capture data and model uncertainty, respectively.
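For concreteness, the evaluation metric can be sketched as follows. The label encoding, toy volumes, and perturbation rate below are illustrative assumptions for a runnable example (NumPy), not details taken from the abstract:

    import numpy as np

    # Assumed label encoding: 0 = background, 1 = breast,
    # 2 = left lung, 3 = right lung, 4 = heart
    def dice_coefficient(pred, target, label):
        """Dice coefficient for one organ between two 3D integer label maps."""
        p = (pred == label)    # voxels the model assigns to this organ
        t = (target == label)  # voxels the reference contour assigns to this organ
        denom = p.sum() + t.sum()
        return 2.0 * np.logical_and(p, t).sum() / denom if denom > 0 else 1.0

    # Toy volumes standing in for one test case and one model's output
    rng = np.random.default_rng(0)
    target = rng.integers(0, 5, size=(32, 64, 64))
    pred = target.copy()
    pred[rng.random(pred.shape) < 0.05] = 0  # corrupt 5% of voxels to background

    print({organ: round(dice_coefficient(pred, target, organ), 4)
           for organ in (1, 2, 3, 4)})

In practice the per-organ DICE would be computed for each of the five cross-validated models on every test patient and then averaged, which is how the figures reported below are described.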

RESULTS

The trained SCNAS-Net showed high segmentation performance for all organs. Average DICEs of the five models trained under 5-fold CV were 0.8370 for the breast, 0.9801 for the left lung, 0.9749 for the right lung, and 0.9340 for the heart. Although the breast is considerably harder to segment owing to its variable shape, volume, and location, the average DICE for the breast remained high. The uncertainty estimates were mainly elevated for unseen or rare cases, such as those involving the heart contour. These results could improve safety when the proposed method is applied in clinical practice, by providing not only the auto-segmentation results but also a mechanism to quantify risk.
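As a rough illustration of this mechanism, the sketch below combines test-time augmentation (a single left-right flip) with a deep ensemble and reports voxelwise predictive entropy as the uncertainty map. The flip choice, ensemble construction, and entropy measure are our illustrative assumptions (PyTorch), not details specified in the abstract:

    import torch

    def predict_with_uncertainty(models, volume):
        """Average softmax over ensemble members (model uncertainty) and
        flipped copies (TTA, data uncertainty); return the segmentation
        and a voxelwise predictive-entropy uncertainty map."""
        probs = []
        with torch.no_grad():
            for model in models:
                for flip in (None, (-1,)):  # identity and left-right flip
                    x = volume if flip is None else torch.flip(volume, dims=flip)
                    p = torch.softmax(model(x), dim=1)  # (N, C, D, H, W)
                    if flip is not None:
                        p = torch.flip(p, dims=flip)    # undo flip before averaging
                    probs.append(p)
        mean_p = torch.stack(probs).mean(dim=0)
        entropy = -(mean_p * torch.log(mean_p.clamp_min(1e-8))).sum(dim=1)
        return mean_p.argmax(dim=1), entropy  # both shaped (N, D, H, W)

    # Usage with placeholder networks standing in for the five trained
    # cross-validation models (hypothetical; SCNAS-Net itself is not public here)
    models = [torch.nn.Conv3d(1, 5, kernel_size=1) for _ in range(5)]
    volume = torch.randn(1, 1, 16, 32, 32)
    seg, unc = predict_with_uncertainty(models, volume)
    print(seg.shape, unc.shape)

High-entropy voxels flag regions where the ensemble and augmented predictions disagree, which is one plausible way such a method could surface the unseen or rare cases mentioned above for clinician review.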

CONCLUSION

A novel Deep Learning-based breast and OARs segmentation technique was successfully developed and demonstrated high segmentation performance together with uncertainty quantification.

CLINICAL RELEVANCE/APPLICATION

The proposed breast and OARs segmentation technique in CT can substantially reduce the time required for manual contouring of the breast and OARs during radiation therapy planning.
