Paper Title
Enhancing Data Diversity for Self-training Based Unsupervised Cross-modality Vestibular Schwannoma and Cochlea Segmentation
Paper Authors
Paper Abstract
Automatic segmentation of vestibular schwannoma (VS) and cochlea from magnetic resonance imaging can facilitate VS treatment planning. Unsupervised segmentation methods have shown promising results without requiring the time-consuming and laborious manual labeling process. In this paper, we present an approach for VS and cochlea segmentation in an unsupervised domain adaptation setting. Specifically, we first develop a cross-site cross-modality unpaired image translation strategy to enrich the diversity of the synthesized data. Then, we devise a rule-based offline augmentation technique to further minimize the domain gap. Lastly, we adopt a self-configuring segmentation framework empowered by self-training to obtain the final results. On the CrossMoDA 2022 validation leaderboard, our method has achieved competitive VS and cochlea segmentation performance with mean Dice scores of 0.8178 $\pm$ 0.0803 and 0.8433 $\pm$ 0.0293, respectively.
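To make the second step of the pipeline concrete, here is a minimal Python sketch of what a rule-based offline augmentation could look like. The abstract does not specify the rules, so the gamma and contrast transformations below are illustrative assumptions standing in for whatever intensity rules the authors apply to the translated images before segmentation training.

import numpy as np

def rule_based_offline_augment(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple intensity rules to a synthesized MRI volume.

    Hypothetical rules for illustration: random gamma correction and
    linear contrast scaling, intended to widen the intensity
    distribution of translated images and shrink the remaining
    synthetic-to-real domain gap.
    """
    v = volume.astype(np.float32)
    # Normalize to [0, 1] so the gamma rule is well defined.
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)
    # Rule 1: random gamma correction.
    gamma = rng.uniform(0.7, 1.5)
    v = v ** gamma
    # Rule 2: random linear contrast scaling around the mean.
    scale = rng.uniform(0.8, 1.2)
    v = (v - v.mean()) * scale + v.mean()
    return np.clip(v, 0.0, 1.0)

# Offline usage: each translated volume is augmented several times and
# the copies are written to disk before segmentation training begins.
rng = np.random.default_rng(0)
fake_t2 = rng.normal(size=(32, 64, 64))  # stand-in for a translated volume
augmented = [rule_based_offline_augment(fake_t2, rng) for _ in range(4)]

Because the augmentation runs offline, the enlarged dataset can be fed unchanged into a self-configuring segmentation framework such as the self-training setup described in the abstract.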