Title
Learning to Segment Anatomical Structures Accurately from One Exemplar
Authors
Abstract
Accurate segmentation of critical anatomical structures is at the core of medical image analysis. The main bottleneck lies in gathering the requisite expert-labeled image annotations in a scalable manner. Methods that can produce accurate anatomical structure segmentation without requiring a large amount of fully annotated training images are highly desirable. In this work, we propose the Contour Transformer Network (CTN), a one-shot anatomy segmentation method with a naturally built-in human-in-the-loop mechanism. Segmentation is formulated as learning a contour evolution process based on graph convolutional networks (GCNs). Training our CTN model requires only one labeled image exemplar and leverages additional unlabeled data through newly introduced loss functions that measure the global shape and appearance consistency of contours. We demonstrate that our one-shot learning method significantly outperforms non-learning-based methods and performs competitively with state-of-the-art fully supervised deep learning approaches. With minimal human-in-the-loop editing feedback, the segmentation performance can be further improved and tailored towards observer-desired outcomes. This can facilitate clinician-designed, imaging-based biomarker assessments (to support personalized quantitative clinical diagnosis) and outperforms fully supervised baselines.
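To illustrate the contour-evolution idea the abstract describes, the sketch below shows one graph-convolution update on a closed contour, treated as a cycle graph over its vertices. This is a minimal illustration, not the paper's CTN architecture: the function name `contour_gcn_step`, the single linear layer, and the toy weight matrices are assumptions standing in for the learned GCN that predicts per-vertex offsets.

```python
import numpy as np

def contour_gcn_step(points, W_self, W_neigh, step_size=1.0):
    """One hypothetical graph-convolution update on a closed contour.

    points: (N, 2) vertex coordinates; the contour is a cycle graph,
    so each vertex's neighbors are its predecessor and successor.
    W_self, W_neigh: (2, 2) weight matrices (stand-ins for learned
    GCN weights). Returns the contour moved by the predicted offsets.
    """
    left = np.roll(points, 1, axis=0)    # predecessor on the cycle
    right = np.roll(points, -1, axis=0)  # successor on the cycle
    neigh_mean = (left + right) / 2.0
    # Linear graph convolution: combine each vertex's own position
    # with the average of its two neighbors to predict an offset.
    offsets = points @ W_self + neigh_mean @ W_neigh
    return points + step_size * offsets

# Toy usage: these hand-picked weights pull every vertex toward its
# neighbor average, i.e. a smoothing/shrinking step on a unit square.
contour = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
W_self = -0.1 * np.eye(2)
W_neigh = 0.1 * np.eye(2)
moved = contour_gcn_step(contour, W_self, W_neigh)
```

In the actual method such a step would be iterated, with the weights trained so the evolved contour matches the exemplar's shape and appearance statistics rather than merely smoothing.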