Paper Title

COVID-19 CT Image Synthesis with a Conditional Generative Adversarial Network

Authors

Yifan Jiang, Han Chen, Murray Loew, Hanseok Ko

Abstract

Coronavirus disease 2019 (COVID-19) is an ongoing global pandemic that has spread rapidly since December 2019. Real-time reverse transcription polymerase chain reaction (rRT-PCR) and chest computed tomography (CT) imaging both play an important role in COVID-19 diagnosis. Chest CT imaging offers the benefits of quick reporting, low cost, and high sensitivity for the detection of pulmonary infection. Recently, deep-learning-based computer vision methods have demonstrated great promise for use in medical imaging applications, including X-rays, magnetic resonance imaging, and CT imaging. However, training a deep-learning model requires large volumes of data, and medical staff face a high risk when collecting COVID-19 CT data due to the high infectivity of the disease. Another issue is the lack of experts available for data labeling. To meet the data requirements for COVID-19 CT imaging, we propose a CT image synthesis approach based on a conditional generative adversarial network that can effectively generate high-quality, realistic COVID-19 CT images for use in deep-learning-based medical imaging tasks. Experimental results show that the proposed method outperforms other state-of-the-art image synthesis methods on generated COVID-19 CT images and shows promise for various machine learning applications, including semantic segmentation and classification.
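The abstract names the core technique but does not describe the architecture. For orientation only, the PyTorch sketch below shows the general shape of a conditional GAN training step in which a mask serves as the conditioning input; the layer sizes, mask-based conditioning, loss, and hyperparameters are all illustrative assumptions, not the authors' method.

```python
# Minimal conditional-GAN sketch (illustrative only; NOT the paper's architecture).
# Assumes a single-channel condition mask and a single-channel CT slice in [-1, 1].
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps (noise, condition mask) -> synthetic CT slice."""
    def __init__(self, z_dim=100, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.fc = nn.Linear(z_dim, img_size * img_size)  # noise -> one spatial channel
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1),   # noise map + condition mask
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z, mask):
        noise_map = self.fc(z).view(-1, 1, self.img_size, self.img_size)
        return self.net(torch.cat([noise_map, mask], dim=1))

class Discriminator(nn.Module):
    """Scores (CT slice, condition mask) pairs as real or fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 1),
        )

    def forward(self, img, mask):
        return self.net(torch.cat([img, mask], dim=1))

# One adversarial training step on stand-in tensors.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

real_ct = torch.randn(4, 1, 64, 64)                     # stand-in for real CT slices
mask = torch.randint(0, 2, (4, 1, 64, 64)).float()      # stand-in condition masks
z = torch.randn(4, 100)

# Discriminator step: real pairs labeled 1, fake pairs labeled 0.
fake_ct = G(z, mask).detach()
loss_d = bce(D(real_ct, mask), torch.ones(4, 1)) + bce(D(fake_ct, mask), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator under the same condition.
loss_g = bce(D(G(z, mask), mask), torch.ones(4, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In practice, image-to-image conditional GANs of this kind typically add a reconstruction term (e.g., an L1 loss against a paired real image) on top of the adversarial loss; the sketch omits that for brevity.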
