Paper Title
ViT-ReT: Vision and Recurrent Transformer Neural Networks for Human Activity Recognition in Videos
Paper Authors
Paper Abstract
Human activity recognition is an emerging and important area in computer vision which seeks to determine the activity an individual or group of individuals is performing. The applications of this field range from generating highlight videos in sports to intelligent surveillance and gesture recognition. Most activity recognition systems rely on a combination of convolutional neural networks (CNNs) to perform feature extraction from the data and recurrent neural networks (RNNs) to determine the time-dependent nature of the data. This paper proposes and designs two transformer neural networks for human activity recognition: a recurrent transformer (ReT), a specialized neural network used to make predictions on sequences of data, and a vision transformer (ViT), a transformer optimized for extracting salient features from images, to improve the speed and scalability of activity recognition. We provide an extensive comparison of the proposed transformer neural networks with contemporary CNN- and RNN-based human activity recognition models in terms of speed and accuracy.
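The two-stage design the abstract describes, a vision transformer extracting per-frame features and a recurrent transformer classifying the feature sequence, can be sketched in PyTorch as below. This is a minimal illustrative sketch only: all module names, layer counts, dimensions, and the mean-pooling choices are assumptions for demonstration, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class FrameFeatureExtractor(nn.Module):
    # Illustrative stand-in for a ViT: patch embedding followed by a
    # transformer encoder, mapping one frame to a feature vector.
    def __init__(self, img_size=64, patch=16, dim=128, heads=4, layers=2):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        enc = nn.TransformerEncoderLayer(dim, heads, dim * 2, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)

    def forward(self, x):                       # x: (B, 3, H, W)
        p = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.encoder(p + self.pos).mean(dim=1)       # (B, dim)

class SequenceClassifier(nn.Module):
    # Illustrative stand-in for a ReT: a transformer over the per-frame
    # feature sequence, pooled into an activity prediction.
    def __init__(self, dim=128, heads=4, layers=2, n_classes=10):
        super().__init__()
        enc = nn.TransformerEncoderLayer(dim, heads, dim * 2, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, feats):                   # feats: (B, T, dim)
        return self.head(self.encoder(feats).mean(dim=1))   # (B, n_classes)

# Tiny end-to-end pass on a dummy clip: batch of 2 clips, 8 RGB frames each.
vit, ret = FrameFeatureExtractor(), SequenceClassifier()
clip = torch.randn(2, 8, 3, 64, 64)             # (batch, frames, C, H, W)
feats = torch.stack([vit(clip[:, t]) for t in range(clip.shape[1])], dim=1)
logits = ret(feats)                             # shape (2, 10)
```

The key point of the design is that attention replaces both roles in the conventional pipeline: spatial attention over patches stands in for CNN feature extraction, and temporal attention over the frame-feature sequence stands in for the RNN, allowing all frames to be processed in parallel rather than sequentially.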