SSG13-07

Automatic Quantification of 3D Body Composition from Abdominal CT with an Ensemble of Convolutional Neural Networks

Tuesday, Dec. 3 11:30AM - 11:40AM Room: S502AB



Participants
Pim Moeskops, PhD, Utrecht, Netherlands (Presenter) Research funded, Quantib BV
Bob D. De Vos, MSc, Utrecht, Netherlands (Abstract Co-Author) Nothing to Disclose
Wouter B. Veldhuis, MD, PhD, Utrecht, Netherlands (Abstract Co-Author) Nothing to Disclose
Anne M. May, Utrecht, Netherlands (Abstract Co-Author) Nothing to Disclose
Sophie Kurk, Utrecht, Netherlands (Abstract Co-Author) Nothing to Disclose
Miriam Koopman, Utrecht, Netherlands (Abstract Co-Author) Nothing to Disclose
Pim A. De Jong, MD, PhD, Houten, Netherlands (Abstract Co-Author) Nothing to Disclose
Tim Leiner, MD, PhD, Utrecht, Netherlands (Abstract Co-Author) Speakers Bureau, Koninklijke Philips NV Research Grant, Bayer AG
Ivana Isgum, PhD, Utrecht, Netherlands (Abstract Co-Author) Research Grant, Pie Medical Imaging BV Research Grant, 3mensio Medical Imaging BV Research Grant, Koninklijke Philips NV

For information about this presentation, contact:

p.moeskops@quantib.com

PURPOSE

Body composition derived from CT, primarily the quantification of fat and muscle, is an important prognostic factor in cardiovascular disease and cancer. However, manual segmentation is time-consuming and, in 3D, practically infeasible. The purpose of this study was to investigate a deep learning-based method for automatic segmentation of subcutaneous fat, visceral fat, and psoas muscle from full abdominal CT scans.

METHOD AND MATERIALS

We included a dataset of 20 native CT scans of the entire abdomen (Siemens Somatom Volume Zoom / Siemens Somatom Definition, 120 kVp, 375 mAs, in-plane resolution 0.63-0.75 mm, slice thickness 5.0 mm, slice increment 5.0 mm). Trained observers defined the reference standard by voxel-wise manual annotation of subcutaneous fat, visceral fat, and psoas muscle in all slices in which the psoas muscle is visible. Images of 10 patients were used to train a dilated convolutional neural network with a receptive field of 131 × 131 voxels to distinguish between the three tissue classes. To ensure robust results, 5 networks were trained independently and ensembled by averaging their probabilistic outputs; each voxel was assigned to the class with the highest average probability. Images of the remaining 10 patients were used to evaluate the performance of the method. Performance was evaluated with Dice coefficients between the manual and automatic segmentations. Additionally, linear correlation coefficients (Pearson's r) were computed between the manual and automatic segmentation volumes.
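The ensembling and evaluation steps above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; function names and array shapes are assumptions for the sake of the example.

```python
import numpy as np

def ensemble_segment(prob_maps):
    """Combine the probabilistic outputs of several networks (illustrative).

    prob_maps: list of arrays of shape (n_classes, *volume_shape),
               one per trained network.
    Returns a label volume: per voxel, the class with the highest
    average probability across networks.
    """
    mean_probs = np.mean(prob_maps, axis=0)  # average over the ensemble
    return np.argmax(mean_probs, axis=0)     # pick the most probable class

def dice(auto, manual, label):
    """Dice coefficient between two label volumes for one tissue class."""
    a = (auto == label)
    m = (manual == label)
    denom = a.sum() + m.sum()
    return 2.0 * np.logical_and(a, m).sum() / denom if denom else 1.0

def pearson_r(x, y):
    """Pearson correlation between paired volume measurements."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])
```

Averaging softmax probabilities before taking the argmax (rather than majority-voting hard labels) preserves each network's confidence, which is the usual motivation for probabilistic ensembling.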

RESULTS

The average Dice coefficients over the 10 test scans were 0.89 ± 0.02 for subcutaneous fat, 0.92 ± 0.04 for visceral fat, and 0.76 ± 0.05 for psoas muscle. At the L3 vertebra level, the average Dice coefficients were 0.92 ± 0.02 for subcutaneous fat, 0.93 ± 0.05 for visceral fat, and 0.87 ± 0.04 for psoas muscle. Pearson's r between the manual and automatic volumes was 0.996 for subcutaneous fat, 0.997 for visceral fat, and 0.941 for psoas muscle. On average, segmentation of a full scan was performed in about 15 seconds.

CONCLUSION

The results show that accurate, fully automatic segmentation of subcutaneous fat, visceral fat, and psoas muscle from full abdominal CT scans is feasible.

CLINICAL RELEVANCE/APPLICATION

The proposed method allows fast and fully automatic analysis of 3D body composition in abdominal CT that can aid in individualized risk assessment in cardiovascular disease and cancer.
