Paper Title
Segmentation of Infrared Breast Images Using MultiResUnet Neural Network
Paper Authors
Paper Abstract
Breast cancer is the second leading cause of death for women in the U.S., and early detection is key to higher survival rates for breast cancer patients. We are investigating infrared (IR) thermography as a noninvasive adjunct to mammography for breast cancer screening. IR imaging is radiation-free, pain-free, and non-contact. Automatic segmentation of the breast area from the acquired full-size breast IR images will help limit the search area for tumors and reduce the time and effort required for manual segmentation. In previous studies, autoencoder-like convolutional and deconvolutional neural networks (C-DCNN) were applied to automatically segment the breast area in IR images. In this study, we applied a state-of-the-art deep-learning segmentation model, MultiResUnet, which consists of an encoder part that captures features and a decoder part for precise localization. We used it to segment the breast area in a set of breast IR images collected in our pilot study by imaging breast cancer patients and normal volunteers with a thermal infrared camera (N2 Imager). The database contains 450 images acquired from 14 patients and 16 volunteers. We used a thresholding method to remove interference in the raw images, remapped them from the original 16-bit depth to 8-bit, and then manually cropped and segmented the 8-bit images. Experiments using leave-one-out cross-validation (LOOCV), with Tanimoto similarity computed against the ground-truth images, show that the average accuracy of MultiResUnet is 91.47%, about 2% higher than that of the autoencoder. MultiResUnet therefore offers a better approach to segmenting breast IR images than our previous model.
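For readers unfamiliar with the architecture, the following is a minimal PyTorch sketch of the MultiRes block that gives MultiResUnet its name: three chained 3x3 convolutions whose outputs are concatenated and fused with a 1x1 residual shortcut, following the original MultiResUnet design. The alpha scaling factor and channel split are illustrative values taken from that design, not settings reported in this abstract, and the sketch is not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiResBlock(nn.Module):
    """Sketch of a MultiRes block: three successive 3x3 convolutions whose
    outputs are concatenated, plus a 1x1 residual shortcut. Channel counts
    (alpha and the 1/6, 1/3, 1/2 split) are illustrative assumptions taken
    from the original MultiResUnet design, not from this paper's abstract."""

    def __init__(self, in_ch, out_ch, alpha=1.67):
        super().__init__()
        w = alpha * out_ch
        f1, f2, f3 = int(w / 6), int(w / 3), int(w / 2)
        total = f1 + f2 + f3
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, f1, 3, padding=1),
                                   nn.BatchNorm2d(f1), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(f1, f2, 3, padding=1),
                                   nn.BatchNorm2d(f2), nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(nn.Conv2d(f2, f3, 3, padding=1),
                                   nn.BatchNorm2d(f3), nn.ReLU(inplace=True))
        self.shortcut = nn.Sequential(nn.Conv2d(in_ch, total, 1),
                                      nn.BatchNorm2d(total))
        self.out_bn = nn.BatchNorm2d(total)

    def forward(self, x):
        c1 = self.conv1(x)          # fine-scale features
        c2 = self.conv2(c1)         # medium-scale features
        c3 = self.conv3(c2)         # coarse-scale features
        cat = torch.cat([c1, c2, c3], dim=1)
        return self.out_bn(torch.relu(cat + self.shortcut(x)))


# Usage example: a single-channel (grayscale IR) input passed through one block.
block = MultiResBlock(in_ch=1, out_ch=32)
y = block(torch.randn(1, 1, 128, 128))   # -> shape (1, 51, 128, 128)
```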
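The preprocessing described above (thresholding to remove interference, then remapping 16-bit raw values to 8-bit) could look like the NumPy sketch below. The percentile-based cutoffs and the linear remapping are assumptions for illustration; the abstract does not state the exact thresholds or mapping the authors used.

```python
import numpy as np

def preprocess_ir_image(raw16, low_pct=1.0, high_pct=99.0):
    """Clip interference outside an assumed percentile window, then linearly
    remap the surviving 16-bit dynamic range to 8-bit (0-255). The percentile
    cutoffs are hypothetical; the paper's actual thresholds are not given here."""
    img = raw16.astype(np.float64)
    low, high = np.percentile(img, [low_pct, high_pct])
    img = np.clip(img, low, high)                      # thresholding step
    scale = 255.0 / max(high - low, 1e-9)              # 16-bit -> 8-bit remap
    return np.round((img - low) * scale).astype(np.uint8)
```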
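The reported accuracy uses Tanimoto similarity, which for binary segmentation masks coincides with the Jaccard index (intersection over union): T(A, B) = |A and B| / (|A| + |B| - |A and B|). A small sketch, assuming the predicted and ground-truth segmentations are given as binary NumPy arrays:

```python
import numpy as np

def tanimoto_similarity(pred_mask, gt_mask):
    """Tanimoto similarity of two binary masks:
    |A & B| / (|A| + |B| - |A & B|), i.e. intersection over union."""
    a = np.asarray(pred_mask, dtype=bool)
    b = np.asarray(gt_mask, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = a.sum() + b.sum() - inter
    return float(inter) / union if union > 0 else 1.0
```

Under LOOCV, each fold would hold out one case for testing while the model trains on the rest; the 91.47% figure would then correspond to the average of this score over the held-out cases.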