Paper Title
Real-Time Selfie Video Stabilization
Paper Authors
Paper Abstract
We propose a novel real-time selfie video stabilization method. Our method is completely automatic and runs at 26 fps. We use a 1D linear convolutional network to directly infer the rigid moving least squares warping, which implicitly balances global rigidity and local flexibility. Our network structure is specifically designed to stabilize the background and foreground simultaneously, while giving users optional control over the stabilization focus (the relative importance of foreground vs. background). To train our network, we collect a selfie video dataset of 1005 videos, which is significantly larger than previous selfie video datasets. We also propose a grid approximation to the rigid moving least squares warp that enables real-time frame warping. Our method is fully automatic and produces visually and quantitatively better results than previous real-time general video stabilization methods. Compared to previous offline selfie video stabilization methods, our approach produces comparable quality with a speed improvement of orders of magnitude.
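The abstract refers to two building blocks that can be illustrated concretely: the rigid moving least squares (MLS) warp (in the sense of Schaefer et al. 2006) and a grid approximation that evaluates the warp only at coarse grid vertices and bilinearly interpolates the displacement everywhere else. The sketch below is not the paper's implementation; the control points, grid spacing, and `alpha` are illustrative assumptions.

```python
# A minimal sketch of rigid moving least squares (MLS) warping with a
# coarse-grid approximation: the warp is evaluated only at grid vertices and
# bilinearly interpolated to every pixel. Not the paper's implementation;
# control points, grid_step, and alpha are illustrative assumptions.
import numpy as np

def _perp(u):
    # Perpendicular of a 2D vector: (x, y) -> (-y, x).
    return np.array([-u[1], u[0]])

def rigid_mls_warp(points, p, q, alpha=1.0, eps=1e-8):
    """Evaluate the rigid MLS deformation (Schaefer et al. 2006) at `points`.
    points: (N, 2) query positions; p, q: (K, 2) source/target control points."""
    out = np.empty((len(points), 2), dtype=np.float64)
    for n, v in enumerate(points):
        d2 = np.sum((p - v) ** 2, axis=1)
        if np.any(d2 < eps):                       # query hits a control point
            out[n] = q[np.argmin(d2)]
            continue
        w = 1.0 / d2 ** alpha                      # MLS weights
        p_star = (w[:, None] * p).sum(0) / w.sum()
        q_star = (w[:, None] * q).sum(0) / w.sum()
        p_hat, q_hat, vv = p - p_star, q - q_star, v - p_star
        fr = np.zeros(2)
        for i in range(len(p)):
            A = w[i] * np.stack([p_hat[i], -_perp(p_hat[i])]) @ \
                np.stack([vv, -_perp(vv)]).T
            fr += q_hat[i] @ A
        out[n] = np.linalg.norm(vv) * fr / (np.linalg.norm(fr) + eps) + q_star
    return out

def grid_approx_warp_field(h, w, p, q, grid_step=32):
    """Dense (h, w, 2) warp field from MLS evaluated on a coarse grid only."""
    gy = np.arange(0, h + grid_step, grid_step)
    gx = np.arange(0, w + grid_step, grid_step)
    gyy, gxx = np.meshgrid(gy, gx, indexing="ij")
    verts = np.stack([gxx.ravel(), gyy.ravel()], axis=1).astype(np.float64)
    disp = (rigid_mls_warp(verts, p, q) - verts).reshape(len(gy), len(gx), 2)
    # Bilinearly interpolate the per-vertex displacement to every pixel.
    ys, xs = np.mgrid[0:h, 0:w]
    fy, fx = ys / grid_step, xs / grid_step
    y0 = np.clip(fy.astype(int), 0, len(gy) - 2)
    x0 = np.clip(fx.astype(int), 0, len(gx) - 2)
    ty, tx = (fy - y0)[..., None], (fx - x0)[..., None]
    d = (disp[y0, x0] * (1 - ty) * (1 - tx) + disp[y0, x0 + 1] * (1 - ty) * tx +
         disp[y0 + 1, x0] * ty * (1 - tx) + disp[y0 + 1, x0 + 1] * ty * tx)
    return np.stack([xs, ys], axis=-1).astype(np.float64) + d

# Example: dense warp field for a 720p frame from four hypothetical control points.
p = np.array([[100., 100.], [1180., 100.], [100., 620.], [1180., 620.]])
q = p + np.array([[5., -3.], [4., 2.], [-6., 1.], [3., 4.]])   # stabilized targets
field = grid_approx_warp_field(720, 1280, p, q, grid_step=64)
print(field.shape)   # (720, 1280, 2)
```

The point of the grid approximation is that the relatively expensive MLS solve runs only at a few hundred grid vertices per frame rather than at every pixel, which is what makes real-time warping plausible; the dense field is recovered by cheap bilinear interpolation.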