Paper Title


Two-Stream AMTnet for Action Detection

Paper Authors

Suman Saha, Gurkirt Singh, Fabio Cuzzolin

Paper Abstract


In this paper, we propose Two-Stream AMTnet, which leverages recent advances in video-based action representation [1] and incremental action tube generation [2]. The majority of present action detectors follow a frame-based representation and late fusion, followed by an offline action tube building step. These are sub-optimal: frame-based features barely encode temporal relations; late fusion restricts the network from learning robust spatiotemporal features; and, finally, offline action tube generation is unsuitable for many real-world problems such as autonomous driving and human-robot interaction, to name a few. The key contributions of this work are: (1) combining AMTnet's 3D proposal architecture with an online action tube generation technique, which allows the model to learn the stronger temporal features needed for accurate action detection and facilitates running inference online; (2) an efficient fusion technique allowing the deep network to learn strong spatiotemporal action representations. This is achieved by augmenting the previous Action Micro-Tube (AMTnet) action detection framework in three distinct ways: (1) by adding a parallel motion stream to the original appearance one in AMTnet; (2) in opposition to state-of-the-art action detectors, which train appearance and motion streams separately and use a test-time late fusion scheme to fuse RGB and flow cues, by jointly training both streams in an end-to-end fashion and merging RGB and optical flow features at training time; (3) by introducing an online action tube generation algorithm which works at video level and in real time (when exploiting only appearance features). Two-Stream AMTnet exhibits superior action detection performance over state-of-the-art approaches on standard action detection benchmarks.
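The joint two-stream training idea from contribution (2) can be illustrated with a minimal sketch. This is not the authors' implementation: the toy backbones, feature dimensions, class count, and concatenation as the fusion operator are all illustrative assumptions. The point it shows is that RGB and optical-flow features are merged at training time, so a single loss back-propagates into both streams end to end.

```python
# Minimal sketch of joint two-stream training with feature fusion at training time.
# Backbones, sizes and the fusion operator are assumptions, not the paper's model.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, feat_dim=256, num_classes=24):
        super().__init__()
        # Appearance stream: consumes RGB frames (3 channels).
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        # Motion stream: consumes optical-flow fields (2 channels: dx, dy).
        self.flow_stream = nn.Sequential(
            nn.Conv2d(2, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        # Fusion head: RGB and flow features are concatenated and classified jointly,
        # so gradients from one loss update both streams (end-to-end training).
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, rgb, flow):
        fused = torch.cat([self.rgb_stream(rgb), self.flow_stream(flow)], dim=1)
        return self.classifier(fused)

# One joint optimisation step over both streams with dummy data.
model = TwoStreamFusion()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
rgb = torch.randn(4, 3, 112, 112)    # dummy RGB batch
flow = torch.randn(4, 2, 112, 112)   # dummy optical-flow batch
labels = torch.randint(0, 24, (4,))
loss = nn.CrossEntropyLoss()(model(rgb, flow), labels)
loss.backward()
optimizer.step()
```

By contrast, a test-time late fusion scheme would train each stream with its own loss and only average their class scores at inference, which is exactly what the abstract argues against.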
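Contribution (3), online action tube generation, can likewise be sketched with a simple greedy-linking loop. This is an assumed simplification, not the paper's algorithm: the `update_tubes` and `iou` helpers and the IoU threshold are hypothetical, and the sketch links single-frame boxes rather than micro-tubes. It only illustrates the incremental idea: detections are matched to existing tubes frame by frame as the video streams in, instead of tubes being built offline after the whole video has been processed.

```python
# Sketch of incremental (online) action tube building via greedy IoU linking.
# Helper names and the threshold are illustrative assumptions.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def update_tubes(tubes, detections, iou_thr=0.3):
    """Extend existing tubes with the current frame's detections, or start new tubes.

    tubes:      list of dicts {'boxes': [...], 'scores': [...]}
    detections: list of (box, score) tuples for the current frame
    """
    unmatched = list(detections)
    for tube in tubes:
        if not unmatched:
            break
        last_box = tube['boxes'][-1]
        # Greedily pick the highest-overlap detection for this tube.
        best = max(unmatched, key=lambda d: iou(last_box, d[0]))
        if iou(last_box, best[0]) >= iou_thr:
            tube['boxes'].append(best[0])
            tube['scores'].append(best[1])
            unmatched.remove(best)
    # Any detection left unmatched seeds a new tube.
    for box, score in unmatched:
        tubes.append({'boxes': [box], 'scores': [score]})
    return tubes

# Example: two frames processed online; a second actor appears in frame 2.
tubes = []
tubes = update_tubes(tubes, [((10, 10, 50, 80), 0.9)])
tubes = update_tubes(tubes, [((12, 11, 52, 82), 0.8), ((200, 40, 240, 120), 0.7)])
print(len(tubes), [len(t['boxes']) for t in tubes])  # 2 tubes, lengths [2, 1]
```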
