Paper Title

Learning Depth from Focus in the Wild

Paper Authors

Changyeon Won, Hae-Gon Jeon

Paper Abstract

For better photography, most recent commercial cameras, including smartphones, have either adopted large-aperture lenses to collect more light or used a burst mode to take multiple images within a short time. These interesting features lead us to examine depth from focus/defocus. In this work, we present convolutional neural network-based depth estimation from a single focal stack. Our method differs from relevant state-of-the-art works in three unique features. First, our method allows depth maps to be inferred in an end-to-end manner, even with image alignment. Second, we propose a sharp region detection module to reduce blur ambiguities in regions with subtle focus changes and weak textures. Third, we design an effective downsampling module to ease the flow of focal information in feature extraction. In addition, for the generalization of the proposed network, we develop a simulator to realistically reproduce the features of commercial cameras, such as changes in field of view, focal length, and principal point. By effectively incorporating these three unique features, our network achieves the top rank in the DDFF 12-Scene benchmark on most metrics. We also demonstrate the effectiveness of the proposed method in various quantitative evaluations and on real-world images taken with various off-the-shelf cameras, compared with state-of-the-art methods. Our source code is publicly available at https://github.com/wcy199705/DfFintheWild.
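To make the depth-from-focus idea concrete: classical (non-learning) approaches pick, for each pixel, the focal-stack slice where a local sharpness measure peaks. The sketch below is a minimal classical baseline using Laplacian energy as the focus measure, not the paper's network; the function name and the synthetic stack are illustrative assumptions.

```python
import numpy as np

def depth_from_focus(stack):
    """Naive depth-from-focus baseline: for each pixel, select the
    focal slice with the highest local sharpness (Laplacian energy).

    stack: (N, H, W) float array, one image per focus setting.
    Returns an (H, W) integer map of slice indices (a coarse depth proxy).
    """
    sharpness = []
    for img in stack:
        # Discrete Laplacian via shifted differences
        # (np.roll wraps circularly at the borders).
        lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
               + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
               - 4.0 * img)
        sharpness.append(lap ** 2)
    # Depth index = slice of maximum focus measure per pixel.
    return np.argmax(np.stack(sharpness), axis=0)

# Tiny synthetic focal stack: slice 0 is flat, slice 1 contains a sharp
# step edge, so pixels near the edge should select slice 1.
stack = np.zeros((2, 8, 8))
stack[1, :, 4:] = 1.0
depth = depth_from_focus(stack)
```

Real pipelines replace the raw argmax with windowed focus measures and sub-slice interpolation; the blur ambiguities in weakly textured regions that the paper targets show up here as ties in the focus measure (argmax then defaults to slice 0).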
