Paper Title

Space-Time-Aware Multi-Resolution Video Enhancement

Paper Authors

Muhammad Haris, Greg Shakhnarovich, Norimichi Ukita

Paper Abstract

We consider the problem of space-time super-resolution (ST-SR): increasing spatial resolution of video frames and simultaneously interpolating frames to increase the frame rate. Modern approaches handle these axes one at a time. In contrast, our proposed model called STARnet super-resolves jointly in space and time. This allows us to leverage mutually informative relationships between time and space: higher resolution can provide more detailed information about motion, and higher frame-rate can provide better pixel alignment. The components of our model that generate latent low- and high-resolution representations during ST-SR can be used to finetune a specialized mechanism for just spatial or just temporal super-resolution. Experimental results demonstrate that STARnet improves the performances of space-time, spatial, and temporal video super-resolution by substantial margins on publicly available datasets.
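To make the ST-SR problem setting concrete, below is a minimal PyTorch-style sketch of a joint space-time super-resolution interface: two consecutive low-resolution frames go in, and high-resolution frames at t=0, t=0.5 (the interpolated frame), and t=1 come out. The `ToySTSR` module, its layers, and the 4x spatial / 2x temporal factors are illustrative assumptions for exposition only, not the authors' STARnet architecture.

```python
# Illustrative sketch of a joint space-time SR interface (NOT STARnet itself):
# given two consecutive low-resolution frames, produce high-resolution versions
# of both plus an interpolated middle frame, all from one shared representation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySTSR(nn.Module):
    """Hypothetical joint space-time super-resolution module (4x spatial, 2x temporal)."""

    def __init__(self, scale: int = 4, feats: int = 64):
        super().__init__()
        self.scale = scale
        # Shared encoder over the two input frames (channel-concatenated),
        # so spatial and temporal predictions draw on the same features.
        self.encode = nn.Sequential(
            nn.Conv2d(6, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Three heads: HR frame at t=0, interpolated HR frame at t=0.5, HR frame at t=1.
        self.heads = nn.ModuleList(
            nn.Conv2d(feats, 3 * scale * scale, 3, padding=1) for _ in range(3)
        )

    def forward(self, lr_t0: torch.Tensor, lr_t1: torch.Tensor):
        feat = self.encode(torch.cat([lr_t0, lr_t1], dim=1))
        outs = []
        for head, base in zip(self.heads, (lr_t0, 0.5 * (lr_t0 + lr_t1), lr_t1)):
            residual = F.pixel_shuffle(head(feat), self.scale)  # sub-pixel upsampling
            upsampled = F.interpolate(base, scale_factor=self.scale,
                                      mode="bilinear", align_corners=False)
            outs.append(upsampled + residual)  # coarse bilinear estimate + learned residual
        return outs  # [HR(t=0), HR(t=0.5), HR(t=1)]


if __name__ == "__main__":
    lr0 = torch.rand(1, 3, 32, 32)
    lr1 = torch.rand(1, 3, 32, 32)
    hr0, hr_mid, hr1 = ToySTSR()(lr0, lr1)
    print(hr0.shape, hr_mid.shape, hr1.shape)  # each: torch.Size([1, 3, 128, 128])
```

The point of the sketch is the interface the abstract describes: the spatial upsampling heads and the temporal interpolation head read from the same latent representation, which is what lets higher resolution inform motion estimation and higher frame rate inform pixel alignment in a jointly trained model.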
