Paper Title
HATNet: An End-to-End Holistic Attention Network for Diagnosis of Breast Biopsy Images
Paper Authors
Paper Abstract
Training end-to-end networks for classifying gigapixel-size histopathological images is computationally intractable. Most approaches are patch-based and first learn local (patch-wise) representations before combining these local representations to produce image-level decisions. However, dividing large tissue structures into patches limits the context available to these networks, which may reduce their ability to learn representations from clinically relevant structures. In this paper, we introduce a novel attention-based network, the Holistic ATtention Network (HATNet), to classify breast biopsy images. We streamline the histopathological image classification pipeline and show how to learn representations from gigapixel-size images end-to-end. HATNet extends the bag-of-words approach and uses self-attention to encode global information, allowing it to learn representations from clinically relevant tissue structures without any explicit supervision. It outperforms the previous best network, Y-Net, which uses supervision in the form of tissue-level segmentation masks, by 8%. Importantly, our analysis reveals that HATNet learns representations from clinically relevant structures, and it matches the classification accuracy of human pathologists on this challenging test set. Our source code is available at \url{https://github.com/sacmehta/HATNet}.
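To make the abstract's core idea concrete, the sketch below illustrates the general pattern it describes: encode each patch independently, then apply self-attention across the resulting patch embeddings so that every patch receives global, whole-image context before pooling into an image-level prediction. This is a minimal, hypothetical PyTorch example, not the authors' implementation (see the repository linked above for that); the backbone, layer sizes, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PatchAttentionClassifier(nn.Module):
    """Illustrative patch-encoder + self-attention classifier (not HATNet itself)."""

    def __init__(self, embed_dim=256, num_heads=4, num_classes=4):
        super().__init__()
        # Lightweight patch encoder; a stand-in for any CNN backbone.
        self.patch_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Self-attention lets each patch embedding attend to all others,
        # injecting global context without pixel-level supervision.
        self.attn = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, patches):
        # patches: (batch, num_patches, 3, H, W)
        b, n, c, h, w = patches.shape
        x = self.patch_encoder(patches.reshape(b * n, c, h, w))  # (b*n, D)
        x = x.reshape(b, n, -1)                                   # (b, n, D)
        x = self.attn(x)                                          # global context
        return self.classifier(x.mean(dim=1))                     # image-level logits


if __name__ == "__main__":
    model = PatchAttentionClassifier()
    dummy = torch.randn(2, 16, 3, 64, 64)  # 2 images, 16 patches each
    print(model(dummy).shape)              # torch.Size([2, 4])
```

The key design point mirrored here is that image-level decisions are made only after patch embeddings have exchanged information through attention, rather than by aggregating independently classified patches.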