Paper Title


Perceptual Multi-Exposure Fusion

Author

Liu, Xiaoning

Abstract


With the ever-increasing demand for high dynamic range (HDR) scene shooting, multi-exposure image fusion (MEF) techniques have proliferated. In recent years, detail-enhancement-based multi-scale exposure fusion approaches have led the way in improving highlight and shadow details. Most such methods, however, are too computationally expensive to be deployed on mobile devices. This paper presents a perceptual multi-exposure fusion method that not only ensures fine shadow/highlight details but also has lower complexity than detail-enhanced methods. We analyze the potential defects of three classical exposure measures in lieu of using a detail-enhancement component, and improve two of them, namely adaptive well-exposedness (AWE) and the gradient of color images (3-D gradient). AWE, designed in the YCbCr color space, accounts for the differences between images of varying exposure. The 3-D gradient is employed to extract fine details. We build a large-scale multi-exposure benchmark dataset suitable for static scenes, containing 167 image sequences in total. Experiments on the constructed dataset demonstrate that the proposed method surpasses eight existing state-of-the-art approaches both visually and in terms of MEF-SSIM values. Moreover, our approach yields better improvements for current image enhancement techniques, preserving fine detail in bright light.
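To make the weighting idea in the abstract concrete, the following is a minimal sketch of classical per-pixel MEF: a well-exposedness weight (Gaussian around mid-intensity) combined with a gradient-based detail weight, normalized across the exposure stack. This is the generic Mertens-style scheme, not the paper's exact AWE or 3-D color gradient formulation; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight peaking at mid-intensity 0.5 (classical measure;
    # the paper's AWE variant instead adapts this per exposure in YCbCr).
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

def gradient_magnitude(img):
    # Finite-difference gradient magnitude as a simple detail measure
    # (a grayscale stand-in for the paper's 3-D color-image gradient).
    gy, gx = np.gradient(img)
    return np.sqrt(gx ** 2 + gy ** 2)

def fuse(exposures, eps=1e-12):
    # Per-pixel weight = exposedness * (1 + detail), normalized over the stack,
    # then a weighted average of the input exposures.
    weights = np.stack([well_exposedness(e) * (1.0 + gradient_magnitude(e))
                        for e in exposures])
    weights /= weights.sum(axis=0) + eps
    return (weights * np.stack(exposures)).sum(axis=0)

# Toy stack: under-, mid-, and over-exposed versions of a gradient image.
base = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
stack = [np.clip(base * gain, 0.0, 1.0) for gain in (0.4, 1.0, 1.6)]
fused = fuse(stack)
```

A full pipeline would apply this per channel and blend multi-scale (e.g. with Laplacian pyramids) to avoid seams; the single-scale version above only illustrates how the two quality measures combine into fusion weights.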
