Paper Title
Text-Derived Knowledge Helps Vision: A Simple Cross-modal Distillation for Video-based Action Anticipation
Paper Authors
Paper Abstract
Anticipating future actions in a video is useful for many autonomous and assistive technologies. Most prior action anticipation work treats this as a vision-modality problem, where the models learn the task information primarily from the video features in the action anticipation datasets. However, knowledge about action sequences can also be obtained from external textual data. In this work, we show how knowledge in pretrained language models can be adapted and distilled into vision-based action anticipation models. We show that a simple distillation technique can achieve effective knowledge transfer and provide consistent gains on a strong vision model (Anticipative Vision Transformer) for two action anticipation datasets (3.5% relative gain on EGTEA-GAZE+ and 7.2% relative gain on EPIC-KITCHEN 55), giving a new state-of-the-art result.
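To make the "simple distillation technique" concrete, below is a minimal sketch of a standard soft-target (Hinton-style) distillation loss that a vision-based anticipation model could be trained with, using a text-derived teacher's action distribution as the soft target. The function name, the `temperature` and `alpha` hyperparameters, and the assumption that the teacher is a pretrained language model fine-tuned on action-label sequences are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cross_modal_distillation_loss(vision_logits, teacher_logits, labels,
                                  temperature=2.0, alpha=0.5):
    """Hypothetical sketch: combine cross-entropy on the ground-truth next
    action with a KL term that pulls the vision (student) model's action
    distribution toward that of a text-based teacher.

    vision_logits:  (batch, num_actions) student (video) model outputs
    teacher_logits: (batch, num_actions) text-teacher outputs
    labels:         (batch,) ground-truth future-action indices
    """
    # Standard supervised loss on the anticipation target.
    ce = F.cross_entropy(vision_logits, labels)

    # Soft-target distillation loss; scaled by T^2 so gradient magnitude
    # stays comparable across temperatures (as in Hinton et al., 2015).
    kd = F.kl_div(
        F.log_softmax(vision_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Weighted sum of the hard-label and soft-label objectives.
    return alpha * ce + (1.0 - alpha) * kd
```

In this sketch the teacher's logits are detached so that only the vision student is updated; the mixing weight `alpha` trades off fitting the dataset labels against matching the text-derived knowledge.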