Paper Title

DIP: Deep Inverse Patchmatch for High-Resolution Optical Flow

Paper Authors

Zihua Zheng, Ni Nie, Zhi Ling, Pengfei Xiong, Jiangyu Liu, Hao Wang, Jiankun Li

Paper Abstract

Recently, dense correlation volume methods have achieved state-of-the-art performance in optical flow. However, computing the correlation volume requires a large amount of memory, which makes prediction difficult on high-resolution images. In this paper, we propose a novel Patchmatch-based framework for high-resolution optical flow estimation. Specifically, we introduce the first end-to-end Patchmatch-based deep learning optical flow method. It obtains high-precision results with lower memory, benefiting from the propagation and local search of Patchmatch. Furthermore, a new inverse propagation is proposed to decouple the complex operations of propagation, which significantly reduces computation over multiple iterations. At the time of submission, our method ranks first on all metrics on the popular KITTI2015 benchmark, and second in EPE on the Sintel clean benchmark among published optical flow methods. Experiments show that our method has strong cross-dataset generalization: it achieves an F1-all of 13.73%, a 21% reduction from the best published result of 17.4% on KITTI2015. Moreover, our method preserves fine details on the high-resolution DAVIS dataset and consumes 2x less memory than RAFT.
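For intuition about why propagation and local search need far less memory than a dense correlation volume, here is a toy NumPy sketch of the classic Patchmatch loop the abstract builds on: each pixel keeps a single flow candidate, adopts a neighbor's flow when it matches better (propagation), then perturbs it randomly (local search). All function names and parameters below are illustrative assumptions for a minimal 2D grayscale case, not the paper's deep, inverse formulation.

```python
import numpy as np

def patch_cost(I1, I2, y, x, fy, fx, r=1):
    """Sum of absolute differences between the patch at (y, x) in I1
    and the flow-displaced patch in I2; infinite if out of bounds."""
    H, W = I1.shape
    ty, tx = y + fy, x + fx
    if not (r <= y < H - r and r <= x < W - r
            and r <= ty < H - r and r <= tx < W - r):
        return np.inf
    p1 = I1[y - r:y + r + 1, x - r:x + r + 1]
    p2 = I2[ty - r:ty + r + 1, tx - r:tx + r + 1]
    return np.abs(p1 - p2).sum()

def patchmatch_flow(I1, I2, iters=4, search=4, seed=0):
    """Toy Patchmatch: random initialization, then alternating scanline
    sweeps of neighbor propagation and random local search."""
    rng = np.random.default_rng(seed)
    H, W = I1.shape
    flow = rng.integers(-2, 3, size=(H, W, 2))  # one (fy, fx) per pixel
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1  # alternate sweep direction
        ys = range(H) if step == 1 else range(H - 1, -1, -1)
        xs = range(W) if step == 1 else range(W - 1, -1, -1)
        for y in ys:
            for x in xs:
                best = flow[y, x]
                best_c = patch_cost(I1, I2, y, x, *best)
                # Propagation: try the already-updated neighbors' flow.
                for ny, nx in ((y - step, x), (y, x - step)):
                    if 0 <= ny < H and 0 <= nx < W:
                        c = patch_cost(I1, I2, y, x, *flow[ny, nx])
                        if c < best_c:
                            best, best_c = flow[ny, nx], c
                # Local search: small random perturbations of the best.
                for _ in range(search):
                    cand = best + rng.integers(-2, 3, size=2)
                    c = patch_cost(I1, I2, y, x, *cand)
                    if c < best_c:
                        best, best_c = cand, c
                flow[y, x] = best
    return flow
```

Note the memory footprint: one flow vector per pixel, O(HW), versus the O(HW x HW) (or windowed O(HW x D^2)) cost volume of dense correlation methods. The paper's inverse propagation further restructures this sequential neighbor-copy step so it parallelizes across iterations.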
