Paper Title


Learning Two-Stream CNN for Multi-Modal Age-related Macular Degeneration Categorization

Authors

Weisen Wang, Xirong Li, Zhiyan Xu, Weihong Yu, Jianchun Zhao, Dayong Ding, Youxin Chen

Abstract


This paper tackles automated categorization of Age-related Macular Degeneration (AMD), a common macular disease among people over 50. Previous research efforts mainly focus on AMD categorization with a single-modal input, be it a color fundus photograph (CFP) or an OCT B-scan image. By contrast, we consider AMD categorization given a multi-modal input, a direction that is clinically meaningful yet mostly unexplored. Contrary to the prior art that takes a traditional approach of feature extraction plus classifier training that cannot be jointly optimized, we opt for an end-to-end multi-modal Convolutional Neural Network (MM-CNN). Our MM-CNN is instantiated by a two-stream CNN, with spatially-invariant fusion to combine information from the CFP and OCT streams. In order to visually interpret the contribution of the individual modalities to the final prediction, we extend the class activation mapping (CAM) technique to the multi-modal scenario. For effective training of MM-CNN, we develop two data augmentation methods. One is GAN-based CFP/OCT image synthesis, with our novel use of CAMs as conditional input of a high-resolution image-to-image translation GAN. The other method is Loose Pairing, which pairs a CFP image and an OCT image on the basis of their classes instead of eye identities. Experiments on a clinical dataset consisting of 1,094 CFP images and 1,289 OCT images acquired from 1,093 distinct eyes show that the proposed solution obtains better F1 and Accuracy than multiple baselines for multi-modal AMD categorization. Code and data are available at https://github.com/li-xirong/mmc-amd.
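The Loose Pairing augmentation described in the abstract can be sketched as follows: instead of requiring a CFP image and an OCT image from the same eye, any CFP/OCT pair sharing a class label is treated as a valid training pair. This is a minimal illustration, not the authors' implementation; the function name, tuple layout, and class labels below are all hypothetical.

```python
import random
from collections import defaultdict

def loose_pairing(cfp_samples, oct_samples, seed=0):
    """Pair CFP and OCT images by shared class label rather than eye identity.

    cfp_samples / oct_samples: lists of (image_id, class_label) tuples.
    Returns one (cfp_id, oct_id, class_label) triple per CFP image, with the
    OCT partner drawn at random from images of the same class.
    """
    rng = random.Random(seed)

    # Index the OCT images by class so partners can be sampled per label.
    oct_by_class = defaultdict(list)
    for oct_id, label in oct_samples:
        oct_by_class[label].append(oct_id)

    pairs = []
    for cfp_id, label in cfp_samples:
        candidates = oct_by_class.get(label)
        if not candidates:
            continue  # no OCT image of this class to pair with
        pairs.append((cfp_id, rng.choice(candidates), label))
    return pairs

# Toy example with made-up IDs and labels: each CFP image is loosely
# paired with an OCT image of the same class, regardless of eye identity.
cfp_images = [("cfp1", "wetAMD"), ("cfp2", "normal")]
oct_images = [("oct1", "wetAMD"), ("oct2", "wetAMD"), ("oct3", "normal")]
pairs = loose_pairing(cfp_images, oct_images)
```

Because every same-class combination is admissible, this scheme multiplies the number of usable multi-modal training pairs well beyond the number of eyes for which both modalities were captured.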
