Paper Title
Unified Recurrence Modeling for Video Action Anticipation
Paper Authors
Paper Abstract
Forecasting future events based on evidence of current conditions is an innate skill of human beings, and a key to predicting the outcome of any decision making. In artificial vision, for example, we would like to predict the next human action before it happens, without observing the future video frames associated with it. Computer vision models for action anticipation are expected to collect the subtle evidence in the preamble of the target actions. In prior studies, recurrence modeling often leads to better performance, and strong temporal inference is assumed to be a key element for reasonable prediction. To this end, we propose a unified recurrence model for video action anticipation via a message passing framework. The information flow in space-time can be described by the interaction between vertices and edges, and the changes of the vertices for each incoming frame reflect the underlying dynamics. Our model leverages self-attention as the building block for each of the message passing functions. In addition, we introduce different edge learning strategies that can be optimized end-to-end to gain better flexibility for the connectivity between vertices. Our experimental results demonstrate that our proposed method outperforms previous works on the large-scale EPIC-Kitchens dataset.
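The abstract's core idea — message passing over a space-time graph where self-attention computes the messages and learnable edge logits control vertex connectivity — can be illustrated with a minimal sketch. This is a hedged illustration, not the paper's implementation: the function name `self_attention_message_pass` and the additive combination of attention scores with edge logits are assumptions for exposition only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_message_pass(V, edge_logits):
    """One message-passing step with self-attention as the message function.

    V:           (n, d) vertex features, e.g. patch tokens of one frame
    edge_logits: (n, n) learnable logits modulating vertex connectivity
                 (stands in for the paper's edge learning strategies)
    """
    d = V.shape[-1]
    scores = V @ V.T / np.sqrt(d)       # pairwise attention logits
    scores = scores + edge_logits       # edges gate which vertices interact
    weights = softmax(scores, axis=-1)  # normalized edge weights per vertex
    return weights @ V                  # aggregate messages into new vertices

# Usage: update 4 vertices of dimension 8 with unconstrained connectivity.
V = np.random.randn(4, 8)
V_next = self_attention_message_pass(V, np.zeros((4, 4)))
```

In a full model this step would run per incoming frame, so the evolution of `V` across frames carries the temporal dynamics the recurrence is meant to capture.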