RSNA 2018

Abstract Archives of the RSNA, 2018


SSA12-05

Differentiation of Hepatic Masses in Abdominal CT Images Using Texture-Aware Convolutional Neural Networks with Texture Image Patches

Sunday, Nov. 25 11:25AM - 11:35AM Room: S406B



Participants
Hansang Lee, Daejeon, Korea, Republic Of (Presenter) Nothing to Disclose
Helen Hong, PhD, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Heejin Bae, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Sungwon Kim, MD, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Joonseok Lim, MD, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Junmo Kim, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose

For information about this presentation, contact:

hlhong@swu.ac.kr

CONCLUSION

Our method can be applied to the differentiation of various subtypes of hepatic masses, including cysts and hemangiomas, and to the early diagnosis of hepatic cancer.

Background

Differentiating hepatic masses into benign and malignant classes in CT images is an important task for the early diagnosis of hepatic cancer and for surgical decision-making. For small masses, intensity and texture features are difficult to acquire, which makes differentiation challenging. We therefore propose a deep convolutional neural network (CNN) classification of hepatic masses that uses texture image patch (TIP) generation to improve classification performance on small masses.

Evaluation

Our method was evaluated on a dataset of 349 abdominal CT scans containing 576 benign and 210 malignant masses. Each mass was manually segmented by a radiologist. In TIP generation, patches representing only the internal texture of a mass were created by repeatedly tiling the segmented mass region to fill a square patch. These TIPs present the texture information to the CNN regardless of the original mass size. Using these TIPs, transfer learning (TL) was performed on an ImageNet-pretrained AlexNet to classify the patches as benign or malignant. To further improve performance, a random forest (RF) classifier was re-trained on the deep features extracted from the last feature layer of the TL-AlexNet. In the experiments, the framework was trained on 390 images (282 benign, 108 malignant), validated on 160 images (113 benign, 47 malignant), and tested on 236 images (181 benign, 55 malignant). The proposed method achieved an accuracy of 87.7%, whereas the comparative methods without TIP, without TL, and without RF achieved accuracies of 83.5%, 80.1%, and 85.2%, respectively.
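
The sketch below illustrates the TIP generation step as we read it from the description above, assuming a NumPy CT slice and binary mask and a 224x224 patch (a common AlexNet input size); the authors' exact patch size and tiling scheme are not specified in the abstract, and the function name and interface are hypothetical.

```python
import numpy as np

def make_texture_image_patch(ct_slice, mask, patch_size=224):
    """Tile the segmented mass texture to fill a fixed-size square patch.

    Minimal sketch of the TIP idea: the patch is filled by repeating the
    mass region so that only the internal texture, not the mass size or
    shape, is presented to the CNN. The bounding-box crop and the 224
    patch size are assumptions, not details from the abstract.
    """
    ys, xs = np.nonzero(mask)
    crop = ct_slice[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    reps_y = int(np.ceil(patch_size / crop.shape[0]))
    reps_x = int(np.ceil(patch_size / crop.shape[1]))
    tiled = np.tile(crop, (reps_y, reps_x))   # repeat the texture
    return tiled[:patch_size, :patch_size]    # trim to the square patch
```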

Discussion

Our TIPs improve the learning efficiency of the CNN by augmenting the texture information of small masses and allowing the CNN to focus on that texture. TL also plays an important role in learning imaging features that are important for differentiating hepatic masses. Re-training the RF classifier on the deep features, instead of using the CNN-classified outputs directly, improves the specificity of the proposed method by enhancing malignancy detection.
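
As an illustration of the TL and RF stages, the following hedged sketch fine-tunes an ImageNet-pretrained AlexNet on TIPs and trains a random forest on the 4096-dimensional activations from the layer preceding the class output; the specific layer, feature dimensionality, and RF hyperparameters are assumptions, not details reported in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

# TL stage (sketch): start from ImageNet weights and replace the
# 1000-class output layer with a 2-class benign/malignant layer.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier[6] = nn.Linear(4096, 2)
# ... fine-tune alexnet on the TIPs here (training loop omitted) ...

def deep_features(model, tip_batch):
    """Return 4096-d activations from the layer just before the class output."""
    model.eval()
    with torch.no_grad():
        x = model.features(tip_batch)      # convolutional feature maps
        x = model.avgpool(x).flatten(1)    # (N, 256*6*6) vector
        x = model.classifier[:6](x)        # stop before the final Linear
    return x.cpu().numpy()

# RF stage (sketch): the forest, not the CNN softmax, makes the final call.
rf = RandomForestClassifier(n_estimators=100)  # hyperparameter assumed
# tips_train: float tensor of shape (N, 3, 224, 224); labels: 0 = benign, 1 = malignant
# rf.fit(deep_features(alexnet, tips_train), labels)
```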