Paper Title

LiteVL: Efficient Video-Language Learning with Enhanced Spatial-Temporal Modeling

Authors

Dongsheng Chen, Chaofan Tao, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu

Abstract

Recent large-scale video-language pre-trained models have shown appealing performance on various downstream tasks. However, the pre-training process is computationally expensive due to the requirement of millions of video-text pairs and the redundant data structure of each video. To mitigate these problems, we propose LiteVL, which adapts a pre-trained image-language model BLIP into a video-text model directly on downstream tasks, without heavy pre-training. To enhance the temporal modeling lacking in the image-language model, we propose to add temporal attention modules in the image encoder of BLIP with dynamic temporal scaling. Besides the model-wise adaptation, we also propose a non-parametric pooling mechanism to adaptively reweight the fine-grained video embedding conditioned on the text. Experimental results on text-video retrieval and video question answering show that the proposed LiteVL even outperforms previous video-language pre-trained models by a clear margin, though without any video-language pre-training.
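The text-conditioned, non-parametric pooling described in the abstract can be illustrated with a minimal sketch: per-frame video embeddings are reweighted by their similarity to the text embedding and then summed, with no learned parameters. The function name, shapes, and softmax temperature `tau` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def text_conditioned_pooling(frame_embeds, text_embed, tau=0.07):
    """Non-parametric pooling sketch: reweight fine-grained video
    embeddings by their similarity to the text embedding.

    frame_embeds: (n_frames, d) array of per-frame embeddings.
    text_embed:   (d,) text embedding.
    tau:          softmax temperature (illustrative value, not from the paper).
    Returns a single (d,) pooled video embedding.
    """
    # Cosine similarity between each frame embedding and the text.
    f = frame_embeds / np.linalg.norm(frame_embeds, axis=1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    sims = f @ t  # (n_frames,)
    # Softmax over frames yields attention weights with no learned parameters.
    w = np.exp(sims / tau)
    w = w / w.sum()
    # Weighted sum pools the frames into one text-conditioned embedding.
    return w @ frame_embeds
```

Because the weights depend only on similarity to the query text, frames that match the text dominate the pooled embedding, which is the adaptive reweighting effect the abstract refers to.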
