Paper Title
Deep Motion Blind Video Stabilization
Paper Authors
Paper Abstract
Despite the advances in generative models in computer vision, video stabilization still lacks a purely regressive deep-learning-based formulation. Deep video stabilization is generally formulated with the help of explicit motion estimation modules due to the lack of a dataset containing pairs of videos with similar perspective but different motion. Consequently, deep learning approaches to this task struggle with the pixel-level synthesis of latent stabilized frames and instead resort to motion estimation modules that indirectly transform unstable frames into stabilized ones, leading to the loss of visual content near the frame boundaries. In this work, we aim to declutter this over-complicated formulation of video stabilization with the help of a novel dataset that contains pairs of training videos with similar perspective but different motion. We verify its effectiveness by successfully learning motion-blind full-frame video stabilization using strictly conventional generative techniques, and further improve stability through a curriculum-learning-inspired adversarial training strategy. Through extensive experiments, we show the quantitative and qualitative advantages of the proposed approach over state-of-the-art video stabilization methods. Moreover, our method achieves a $\sim3\times$ speed-up over the fastest currently available video stabilization methods.
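The abstract gives no implementation details, so the following is only a minimal, hypothetical PyTorch sketch of the training scheme it describes: a generator directly regresses the stabilized frame from a window of unstable frames (no motion estimation), trained first with a pixel loss alone and later with an adversarial term phased in, echoing the curriculum-inspired strategy. The network architectures, window size, switch-over epoch, and loss weights below are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of motion-blind full-frame stabilization training.
# All hyperparameters and architectures are assumptions for illustration.
import torch
import torch.nn as nn

WINDOW = 5            # assumed number of unstable input frames per sample
ADV_START_EPOCH = 10  # assumed epoch at which adversarial training begins
LAMBDA_ADV = 1e-3     # assumed weight of the adversarial term

class Generator(nn.Module):
    """Toy stand-in for the full-frame synthesis network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * WINDOW, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, frames):      # frames: (B, 3*WINDOW, H, W)
        return self.net(frames)     # synthesized stabilized frame: (B, 3, H, W)

class Discriminator(nn.Module):
    """Toy patch discriminator used once the adversarial stage starts."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, frame):
        return self.net(frame)      # patch logits

def train_step(G, D, opt_g, opt_d, unstable, stable, epoch, bce, l1):
    """One step: pixel loss always; adversarial loss only after the
    curriculum switch-over epoch."""
    fake = G(unstable)
    loss_g = l1(fake, stable)
    if epoch >= ADV_START_EPOCH:
        # Discriminator update: real (stable) vs. synthesized frames.
        opt_d.zero_grad()
        real_logits = D(stable)
        fake_logits = D(fake.detach())
        loss_d = bce(real_logits, torch.ones_like(real_logits)) + \
                 bce(fake_logits, torch.zeros_like(fake_logits))
        loss_d.backward()
        opt_d.step()
        # Generator tries to fool the updated discriminator.
        adv_logits = D(fake)
        loss_g = loss_g + LAMBDA_ADV * bce(adv_logits, torch.ones_like(adv_logits))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item()

# Usage with dummy tensors:
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
unstable = torch.randn(2, 3 * WINDOW, 64, 64)   # window of unstable frames
stable = torch.randn(2, 3, 64, 64)              # paired ground-truth stable frame
train_step(G, D, opt_g, opt_d, unstable, stable, epoch=12,
           bce=nn.BCEWithLogitsLoss(), l1=nn.L1Loss())
```

Because the generator outputs the full frame directly rather than warping the input, no cropping of frame boundaries is needed, which is the "full-frame" property the abstract emphasizes; the staged introduction of the adversarial loss mirrors the curriculum-learning-inspired schedule.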