Paper Title

Contour Transformer Network for One-shot Segmentation of Anatomical Structures

Authors

Yuhang Lu, Kang Zheng, Weijian Li, Yirui Wang, Adam P. Harrison, Chihung Lin, Song Wang, Jing Xiao, Le Lu, Chang-Fu Kuo, Shun Miao

Abstract

Accurate segmentation of anatomical structures is vital for medical image analysis. State-of-the-art accuracy is typically achieved by supervised learning methods, where gathering the requisite expert-labeled image annotations in a scalable manner remains a main obstacle. Therefore, annotation-efficient methods that can produce accurate anatomical structure segmentation are highly desirable. In this work, we present the Contour Transformer Network (CTN), a one-shot anatomy segmentation method with a naturally built-in human-in-the-loop mechanism. We formulate anatomy segmentation as a contour evolution process and model the evolution behavior with graph convolutional networks (GCNs). Training the CTN model requires only one labeled image exemplar and leverages additional unlabeled data through newly introduced loss functions that measure the global shape and appearance consistency of contours. On segmentation tasks over four different anatomies, we demonstrate that our one-shot learning method significantly outperforms non-learning-based methods and performs competitively with state-of-the-art fully supervised deep learning methods. With minimal human-in-the-loop editing feedback, the segmentation performance can be further improved to surpass the fully supervised methods.
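The abstract's core idea is to treat a segmentation contour as a cyclic graph of vertices and let graph convolutions predict per-vertex displacements that iteratively evolve the contour. The following is a minimal NumPy sketch of that idea only, not the authors' implementation: the two-layer toy GCN, the random weights, and all function names are illustrative assumptions.

```python
import numpy as np

def contour_adjacency(n):
    """Row-normalized adjacency of a closed contour graph: each vertex
    is connected to its two ring neighbors plus a self-loop."""
    A = np.eye(n)
    for i in range(n):
        A[i, (i - 1) % n] = 1.0
        A[i, (i + 1) % n] = 1.0
    return A / A.sum(axis=1, keepdims=True)

def gcn_layer(X, A, W):
    """One graph-convolution layer: average neighbor features, project, ReLU."""
    return np.maximum(A @ X @ W, 0.0)

def evolve_contour(points, A, weights, steps=3):
    """Repeatedly offset contour vertices by GCN-predicted (dx, dy) displacements.
    In the real CTN, the vertex features would also include image appearance
    cues; here the features are just the vertex coordinates (a toy assumption)."""
    for _ in range(steps):
        H = gcn_layer(points, A, weights["W1"])
        offsets = A @ H @ weights["W2"]  # linear head -> per-vertex offset
        points = points + offsets
    return points

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 32
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    init = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # circular init
    A = contour_adjacency(n)
    weights = {"W1": rng.normal(scale=0.1, size=(2, 8)),
               "W2": rng.normal(scale=0.1, size=(8, 2))}
    out = evolve_contour(init, A, weights)
    print(out.shape)  # prints (32, 2)
```

In the paper's setting the evolution is learned so the contour converges onto the anatomy boundary; this sketch only shows the data flow of graph-convolutional contour updates on a cyclic vertex graph.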
