Paper Title
Annotation by Clicks: A Point-Supervised Contrastive Variance Method for Medical Semantic Segmentation
Paper Authors
Paper Abstract
Medical image segmentation methods typically rely on large numbers of densely annotated images for model training, which are notoriously expensive and time-consuming to collect. To alleviate this burden, weakly supervised techniques have been exploited to train segmentation models with less expensive annotations. In this paper, we propose a novel point-supervised contrastive variance method (PSCV) for medical image semantic segmentation, which requires only one annotated pixel point per organ category. The proposed method trains the base segmentation network using a novel contrastive variance (CV) loss to exploit the unlabeled pixels and a partial cross-entropy loss on the labeled pixels. The CV loss function is designed to exploit the statistical spatial distribution properties of organs in medical images, together with their variance distribution map representations, to enforce discriminative predictions on the unlabeled pixels. Experimental results on two standard medical image datasets demonstrate that the proposed method outperforms state-of-the-art weakly supervised methods on point-supervised medical image semantic segmentation tasks.
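The partial cross-entropy component mentioned in the abstract computes a standard cross-entropy loss, but only over the sparsely annotated pixels (one clicked point per organ category), with all unlabeled pixels ignored. A minimal NumPy sketch, assuming `(H, W, C)` per-pixel logits and a label map where unlabeled pixels carry a sentinel `ignore_index` (these shapes and names are illustrative, not the paper's actual implementation):

```python
import numpy as np

def partial_cross_entropy(logits, point_labels, ignore_index=-1):
    """Cross-entropy computed only on the sparsely annotated pixels.

    logits: (H, W, C) raw class scores per pixel.
    point_labels: (H, W) int array; annotated pixels hold a class id,
    all unlabeled pixels hold `ignore_index` and contribute no loss.
    """
    # Softmax over the class axis (numerically stabilized).
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

    # Keep only the clicked (labeled) pixel points.
    mask = point_labels != ignore_index
    if not mask.any():
        return 0.0
    # Negative log-likelihood of the true class at each labeled point.
    picked = probs[mask, point_labels[mask]]
    return float(-np.log(picked + 1e-12).mean())

# Example: a 2x2 image, 2 classes, one clicked point per class.
logits = np.array([[[2.0, 0.0], [0.0, 2.0]],
                   [[1.0, 1.0], [0.5, 0.5]]])
labels = np.full((2, 2), -1)
labels[0, 0] = 0   # one click for class 0
labels[0, 1] = 1   # one click for class 1
loss = partial_cross_entropy(logits, labels)
```

In the full method this term supervises only the clicked points, while the CV loss drives the predictions on the remaining unlabeled pixels.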