Paper Title
A Quantitative Comparison between Shannon and Tsallis Havrda Charvat Entropies Applied to Cancer Outcome Prediction
Paper Authors
Abstract
In this paper, we propose a quantitative comparison of loss functions based on the parameterized Tsallis-Havrda-Charvat entropy and the classical Shannon entropy for training deep networks on the small datasets typically encountered in medical applications. Shannon cross-entropy is widely used as a loss function for most neural networks applied to the segmentation, classification and detection of images. Shannon entropy is a particular case of Tsallis-Havrda-Charvat entropy. In this work, we compare these two entropies through a medical application: predicting recurrence in patients with head-neck and lung cancers after treatment. Based on both CT images and patient information, a multitask deep neural network is proposed to perform a recurrence prediction task, using cross-entropy as the loss function, together with an image reconstruction task. Tsallis-Havrda-Charvat cross-entropy is a parameterized cross-entropy with parameter $α$; Shannon entropy is the particular case obtained for $α$ = 1. The influence of this parameter on the final prediction results is studied. The experiments are conducted on two datasets comprising a total of 580 patients, of whom 434 suffered from head-neck cancers and 146 from lung cancers. The results show that Tsallis-Havrda-Charvat entropy can achieve better prediction accuracy for certain values of $α$.
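To make the parameterization concrete, the following is a minimal NumPy sketch of one common form of the Tsallis-Havrda-Charvat cross-entropy, $L_α(y, q) = \frac{1}{α-1}\sum_i y_i\,(1 - q_i^{\,α-1})$, which recovers the Shannon cross-entropy $-\sum_i y_i \ln q_i$ in the limit $α \to 1$. This is an illustrative reconstruction from the entropy's standard definition, not necessarily the exact loss used in the paper; the function name and interface are hypothetical.

```python
import numpy as np

def thc_cross_entropy(y, q, alpha):
    """Tsallis-Havrda-Charvat cross-entropy between a target distribution y
    (e.g. one-hot labels) and predicted probabilities q, with parameter alpha.

    For alpha == 1 this falls back to the Shannon cross-entropy, its
    limiting case; for alpha != 1 it uses the generalized form above.
    """
    y = np.asarray(y, dtype=float)
    q = np.asarray(q, dtype=float)
    if alpha == 1.0:
        # Shannon cross-entropy: -sum_i y_i * ln(q_i)
        return -np.sum(y * np.log(q))
    # Generalized form: (1 / (alpha - 1)) * sum_i y_i * (1 - q_i^(alpha - 1))
    return np.sum(y * (1.0 - q ** (alpha - 1.0))) / (alpha - 1.0)
```

As a sanity check, evaluating the loss at a value of $α$ slightly above 1 gives a result very close to the Shannon cross-entropy, which is the relationship the abstract relies on.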