Paper Title
M-GenSeg: Domain Adaptation For Target Modality Tumor Segmentation With Annotation-Efficient Supervision
Paper Authors
Paper Abstract
Automated medical image segmentation using deep neural networks typically requires substantial supervised training. However, these models fail to generalize well across different imaging modalities. This shortcoming, amplified by the limited availability of expert-annotated data, has been hampering the deployment of such methods at a larger scale across modalities. To address these issues, we propose M-GenSeg, a new semi-supervised generative training strategy for cross-modality tumor segmentation on unpaired bi-modal datasets. With the addition of known healthy images, an unsupervised objective encourages the model to disentangle tumors from the background, which parallels the segmentation task. Then, by teaching the model to convert images across modalities, we leverage available pixel-level annotations from the source modality to enable segmentation in the unannotated target modality. We evaluated performance on a brain tumor segmentation dataset composed of four different contrast sequences from the public BraTS 2020 challenge data. We report consistent improvements in Dice scores over state-of-the-art domain-adaptive baselines on the unannotated target modality. Unlike prior art, M-GenSeg also introduces the ability to train with a partially annotated source modality.
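To make the abstract's three training signals concrete, here is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a supervised segmentation loss on the annotated source modality, a cycle-consistent translation loss so those annotations transfer to the target modality, and an unsupervised disentanglement loss that exploits known healthy images. All module and function names (TinyNet, trans_s2t, healthy_dec, training_losses) are illustrative assumptions; the actual M-GenSeg relies on adversarial image translation, which is omitted here for brevity.

```python
# A minimal sketch of M-GenSeg-style training signals, assuming hypothetical
# encoder/decoder/segmenter modules. This is NOT the released M-GenSeg code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Stand-in for any image-to-image network (translator, decoder, or segmenter)."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical components (names are assumptions, not from the paper).
trans_s2t = TinyNet()    # source -> target modality translator
trans_t2s = TinyNet()    # target -> source modality translator
segmenter = TinyNet()    # tumor segmenter (outputs logits)
healthy_dec = TinyNet()  # predicts a tumor-free version of its input

def training_losses(x_src, y_src, x_tgt, tgt_is_healthy):
    # 1) Supervised segmentation on the annotated source modality.
    loss_sup = F.binary_cross_entropy_with_logits(segmenter(x_src), y_src)

    # 2) Cross-modality translation with cycle consistency, so source-modality
    #    annotations stay usable after translating images into the target domain.
    fake_tgt = trans_s2t(x_src)
    loss_cyc = F.l1_loss(trans_t2s(fake_tgt), x_src)
    loss_sup_trans = F.binary_cross_entropy_with_logits(segmenter(fake_tgt), y_src)

    # 3) Unsupervised disentanglement on the unannotated target modality: the
    #    image is explained as a healthy background plus a tumor region, and
    #    images known to be healthy must contain no predicted tumor. (The real
    #    method adds an adversarial loss to keep the healthy background
    #    realistic and rule out the trivial all-tumor solution; omitted here.)
    seg_tgt = torch.sigmoid(segmenter(x_tgt))
    recon = healthy_dec(x_tgt) * (1 - seg_tgt) + x_tgt * seg_tgt
    loss_unsup = F.l1_loss(recon, x_tgt)
    if tgt_is_healthy:
        loss_unsup = loss_unsup + seg_tgt.mean()  # suppress tumor on healthy scans

    return loss_sup + loss_sup_trans + loss_cyc + loss_unsup

# Toy usage with random single-channel 64x64 "scans" and fake pixel labels.
x_src = torch.rand(2, 1, 64, 64)
y_src = (torch.rand(2, 1, 64, 64) > 0.9).float()
x_tgt = torch.rand(2, 1, 64, 64)
loss = training_losses(x_src, y_src, x_tgt, tgt_is_healthy=False)
loss.backward()
```

The key design point the sketch tries to convey is that the segmenter is shared across all three objectives: even when no target-modality labels exist, the translation and healthy-image terms still push gradients through it, which is what makes annotation-efficient, cross-modality training possible.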