Paper Title
Auto-weighting for Breast Cancer Classification in Multimodal Ultrasound
Paper Authors
Paper Abstract
Breast cancer is the most common invasive cancer in women. Besides primary B-mode ultrasound screening, sonographers have explored the inclusion of Doppler, strain, and shear-wave elasticity imaging to improve diagnosis. However, recognizing useful patterns across all image types and weighing the significance of each modality can elude less-experienced clinicians. In this paper, we explore, for the first time, an automatic way to combine these four types of ultrasonography to discriminate between benign and malignant breast nodules. We propose a novel multimodal network that offers promising learnability and simplicity while improving classification accuracy. The key ideas are a weight-sharing strategy that encourages interaction between modalities and an additional cross-modality objective that integrates global information. Rather than hardcoding the weight of each modality in the model, we embed the weighting in a Reinforcement Learning framework and learn it in an end-to-end manner. The model is thus trained to seek the optimal multimodal combination without handcrafted heuristics. The proposed framework is evaluated on a dataset containing 1,616 sets of multimodal images. The model achieved a classification accuracy of 95.4%, which indicates the effectiveness of the proposed method.
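To make the fusion scheme concrete, below is a minimal PyTorch sketch of the two ingredients the abstract names: a single encoder shared across the four modalities (weight sharing) and a learnable per-modality weighting trained with both per-modality and fused (cross-modality) classification objectives. All names here (SharedEncoder, MultimodalNet, modality_logits) are illustrative and not from the paper, and the softmax-normalized learnable weights are a differentiable stand-in for the paper's Reinforcement Learning agent, which is not reproduced in this sketch.

```python
# Hypothetical sketch, not the authors' implementation. Assumes four
# single-image modalities (B-mode, Doppler, strain, shear-wave
# elasticity), each a 3-channel image, and binary benign/malignant labels.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """One small CNN backbone reused for every modality (weight sharing)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MultimodalNet(nn.Module):
    def __init__(self, num_modalities: int = 4, feat_dim: int = 128):
        super().__init__()
        self.encoder = SharedEncoder(feat_dim)            # shared across modalities
        # Learnable per-modality weights; in the paper these are chosen
        # by an RL agent rather than learned by backpropagation.
        self.modality_logits = nn.Parameter(torch.zeros(num_modalities))
        self.per_modality_head = nn.Linear(feat_dim, 2)   # per-modality objective
        self.fusion_head = nn.Linear(feat_dim, 2)         # cross-modality objective

    def forward(self, images):
        # images: list of 4 tensors, each of shape (B, 3, H, W).
        feats = torch.stack([self.encoder(x) for x in images], dim=1)  # (B, 4, D)
        w = torch.softmax(self.modality_logits, dim=0)                 # (4,)
        fused = (w[None, :, None] * feats).sum(dim=1)                  # (B, D)
        per_modality = self.per_modality_head(feats)                   # (B, 4, 2)
        fused_logits = self.fusion_head(fused)                         # (B, 2)
        return fused_logits, per_modality, w

# Example forward/backward pass on random data (batch of 8):
model = MultimodalNet()
images = [torch.randn(8, 3, 224, 224) for _ in range(4)]
fused_logits, per_modality, w = model(images)
labels = torch.randint(0, 2, (8,))
ce = nn.CrossEntropyLoss()
# Cross-modality (fused) loss plus the four per-modality losses.
loss = ce(fused_logits, labels) + ce(per_modality.reshape(-1, 2),
                                     labels.repeat_interleave(4))
loss.backward()
```

In the actual framework the modality weighting would be produced by the Reinforcement Learning component described in the abstract, with the policy rewarded by classification performance; the gradient-learned softmax weights above merely illustrate where such a weighting enters the fusion.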