Paper Title

Wound Segmentation with Dynamic Illumination Correction and Dual-view Semantic Fusion

Paper Authors

Honghui Liu, Changjian Wang, Kele Xu, Fangzhao Li, Ming Feng, Yuxing Peng, Hongjun He

Paper Abstract

Wound image segmentation is a critical component of the clinical diagnosis and timely treatment of wounds. Recently, deep learning has become the mainstream methodology for wound image segmentation. However, pre-processing of the wound image, such as illumination correction, is required before the training phase because it can greatly improve performance. The correction procedure and the training of the deep model are independent of each other, which leads to sub-optimal segmentation performance, as a fixed illumination correction may not be suitable for all images. To address the aforementioned issues, this paper proposes an end-to-end dual-view segmentation approach that incorporates a learnable illumination correction module into the deep segmentation model. The parameters of the module are learned and updated automatically during the training stage, while the dual-view fusion fully exploits the features of both the raw images and the enhanced ones. To demonstrate the effectiveness and robustness of the proposed framework, extensive experiments are conducted on benchmark datasets. The encouraging results suggest that our framework significantly improves segmentation performance compared with state-of-the-art methods.
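
The abstract does not specify the actual architecture, so the following is only a minimal, hypothetical PyTorch sketch of the idea it describes: an illumination correction module whose weights are updated by back-propagation together with the segmentation network (end-to-end), and a dual-view fusion that combines features from the raw and the enhanced image. All module names, layer sizes, and the concatenation-based fusion below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: learnable illumination correction + dual-view fusion.
# Layer sizes and fusion scheme are assumptions for illustration only.
import torch
import torch.nn as nn


class LearnableIlluminationCorrection(nn.Module):
    """Predicts a per-pixel gain map and applies it to the raw image.

    Because the gain predictor is an ordinary nn.Module, its parameters are
    updated by back-propagation together with the segmentation network.
    """

    def __init__(self, channels: int = 3):
        super().__init__()
        self.gain = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # gain in (0, 1), rescaled below
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Rescale the gain to (0, 2) so the module can both brighten and darken.
        return torch.clamp(x * (2.0 * self.gain(x)), 0.0, 1.0)


class DualViewSegmenter(nn.Module):
    """Encodes the raw and illumination-corrected views and fuses their features."""

    def __init__(self, channels: int = 3, num_classes: int = 1):
        super().__init__()
        self.correct = LearnableIlluminationCorrection(channels)
        # Two lightweight encoders stand in for the real segmentation backbones.
        self.enc_raw = nn.Sequential(nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.enc_enh = nn.Sequential(nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True))
        # Fuse the two views by channel concatenation followed by a 1x1 convolution.
        self.fuse = nn.Conv2d(64, 32, kernel_size=1)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        enhanced = self.correct(x)
        fused = self.fuse(torch.cat([self.enc_raw(x), self.enc_enh(enhanced)], dim=1))
        return self.head(fused)  # per-pixel wound logits


if __name__ == "__main__":
    model = DualViewSegmenter()
    logits = model(torch.rand(2, 3, 128, 128))
    print(logits.shape)  # torch.Size([2, 1, 128, 128])
```

Training this sketch end-to-end with an ordinary segmentation loss (e.g., binary cross-entropy on the logits) is what makes the correction "dynamic": the gain map is optimized jointly with the segmenter rather than fixed in a separate pre-processing step.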
