RSNA 2019

Abstract Archives of the RSNA, 2019


SSM14-02

Synthetic Training Data Augmentation for Assisting CT Liver Lesion Classification with Generative Adversarial Networks

Wednesday, Dec. 4 3:10PM - 3:20PM Room: E353C



Participants
Hansang Lee, Daejeon, Korea, Republic Of (Presenter) Nothing to Disclose
Helen Hong, PhD, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Heejin Bae, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Sungwon Kim, MD, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Joonseok Lim, MD, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose
Junmo Kim, Seoul, Korea, Republic Of (Abstract Co-Author) Nothing to Disclose

For information about this presentation, contact:

hlhong@swu.ac.kr

CONCLUSION

Our GAN-DA method shows high potential for application to liver lesion classification in CT images as well as to other medical image classification problems. (This work was supported by the NRF grant funded by the Korea government (MSIP) (2017R1D1A1B03029631))

Background

The small-dataset problem, caused by the limited availability of medical images, is one of the major challenges in deep learning-based medical image classification. Data augmentation (DA) through scaling and rotation of the training images has been used to avoid the overfitting caused by small datasets, but this conventional DA is limited in its ability to diversify data patterns and improve learning efficiency. We therefore propose a generative adversarial network (GAN)-based DA method and apply it to the deep learning classification of liver lesions in CT images to verify its performance.
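As a rough illustration of the classic DA described above, the following Python sketch applies random scaling and rotation to a lesion image patch. The transform ranges, patch handling, and the use of torchvision are illustrative assumptions, not details taken from the abstract.

import torch
from torchvision import transforms

# Classic DA: random rotation and scaling of a lesion patch.
# The rotation and scale ranges below are illustrative assumptions.
classic_da = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomRotation(degrees=20),                 # random rotation
    transforms.RandomAffine(degrees=0, scale=(0.8, 1.2)),  # random scaling
    transforms.ToTensor(),
])

def augment_patch(patch: torch.Tensor, n_copies: int = 10):
    """Return n_copies randomly scaled/rotated versions of one lesion patch."""
    return [classic_da(patch) for _ in range(n_copies)]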

Evaluation

Our method was evaluated on a dataset of 502 abdominal CT scans containing 676 cysts (C), 130 hemangiomas (H), and 484 metastases (M). Each lesion was contoured by a radiologist. To train a CNN classifier, DA was performed to increase the amount of training data and avoid overfitting. In classic DA, augmented images were generated by randomly scaling or rotating an image patch. In GAN-DA, augmented images were generated by training a GAN on the given image patches to create synthetic training images. GAN-DA can generate synthetic data with novel patterns by combining the imaging characteristics of the given images. An AlexNet CNN was then trained with the augmented training data to classify the unseen test data. In the experiments, combining the two DA methods achieved an accuracy of 74.59%, whereas classic DA and GAN-DA alone achieved accuracies of 74.36% and 66.45%, respectively.
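A minimal sketch of how GAN-synthesized patches might be combined with classic-DA patches to train the AlexNet classifier is given below. The DCGAN-style latent vector, the synthesize_patches helper, and the three-class output head are hypothetical details; the abstract does not specify the GAN architecture or training setup.

import torch
import torch.nn as nn
from torchvision import models

LATENT_DIM = 100  # assumed size of the GAN latent vector (not stated in the abstract)

def synthesize_patches(generator: nn.Module, n_samples: int) -> torch.Tensor:
    """Sample latent codes and generate synthetic lesion patches from a trained GAN generator."""
    generator.eval()
    with torch.no_grad():
        z = torch.randn(n_samples, LATENT_DIM, 1, 1)
        return generator(z)  # (n_samples, C, H, W) synthetic patches

# AlexNet classifier with a 3-way head for cyst / hemangioma / metastasis.
classifier = models.alexnet(weights=None)
classifier.classifier[6] = nn.Linear(4096, 3)

# Training would then proceed on the union of the real patches,
# the classic-DA patches, and the GAN-synthesized patches.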

Discussion

In deep learning with small datasets, classic DA extends the amount of training data but is limited to repeating the given patterns. The proposed GAN-DA further complements the pattern distribution of the given data by diversifying the data patterns. As a result, training the CNN with data generated by the two DA methods combined achieves the best performance in liver lesion classification.
