Paper Title

Depthwise Spatio-Temporal STFT Convolutional Neural Networks for Human Action Recognition

Paper Authors

Sudhakar Kumawat, Manisha Verma, Yuta Nakashima, Shanmuganathan Raman

Paper Abstract

Conventional 3D convolutional neural networks (CNNs) are computationally expensive, memory intensive, prone to overfitting, and, most importantly, in need of improved feature learning capabilities. To address these issues, we propose spatio-temporal short-term Fourier transform (STFT) blocks, a new class of convolutional blocks that can serve as an alternative to the 3D convolutional layer and its variants in 3D CNNs. An STFT block consists of non-trainable convolution layers that capture spatially and/or temporally local Fourier information using an STFT kernel at multiple low-frequency points, followed by a set of trainable linear weights for learning channel correlations. STFT blocks significantly reduce the space-time complexity of 3D CNNs: in general, they use 3.5 to 4.5 times fewer parameters and incur 1.5 to 1.8 times lower computational cost than state-of-the-art methods. Furthermore, their feature learning capabilities are significantly better than those of the conventional 3D convolutional layer and its variants. Our extensive evaluation on seven action recognition datasets, including Something-Something v1 and v2, Jester, Diving-48, Kinetics-400, UCF101, and HMDB51, demonstrates that STFT-block-based 3D CNNs achieve on-par or better performance than state-of-the-art methods.
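The core idea of an STFT block, as described above, is a two-stage factorization: a fixed (non-trainable) convolution whose kernels evaluate a local Fourier transform at a few low-frequency points, applied depthwise per channel, followed by trainable linear (pointwise) weights that mix channels. The following is a minimal 1D NumPy sketch of that structure, not the paper's implementation: the paper uses separable 3D STFT kernels, while here `stft_kernels`, `stft_block`, the window size, and the frequency points are all hypothetical simplifications chosen for illustration.

```python
import numpy as np

def stft_kernels(size, freqs):
    """Build fixed, non-trainable local Fourier kernels.

    For each low-frequency point f, produce the real and imaginary
    parts of exp(-2j*pi*f*n) over a window of `size` samples.
    Returns an array of shape (2*len(freqs), size).
    """
    n = np.arange(size) - size // 2
    kernels = []
    for f in freqs:
        kernels.append(np.cos(2 * np.pi * f * n))   # real part
        kernels.append(-np.sin(2 * np.pi * f * n))  # imaginary part
    return np.stack(kernels)

def stft_block(x, freqs, w_point, size=3):
    """Apply a toy 1D STFT block.

    Stage 1 (fixed): convolve each input channel with every STFT
    kernel (depthwise -- no cross-channel mixing, no learned weights).
    Stage 2 (trainable): pointwise linear weights `w_point` of shape
    (out_channels, channels * 2*len(freqs)) learn channel correlations.

    x: (channels, length) input; returns (out_channels, length).
    """
    k = stft_kernels(size, freqs)              # (F2, size)
    c, length = x.shape
    pad = size // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))       # same-length output
    feats = np.empty((c * k.shape[0], length))
    for ci in range(c):                        # depthwise stage
        for fi in range(k.shape[0]):
            for t in range(length):
                feats[ci * k.shape[0] + fi, t] = xp[ci, t:t + size] @ k[fi]
    return w_point @ feats                     # trainable pointwise stage
```

Only `w_point` carries learnable parameters, which is the source of the parameter savings the abstract reports: the spatio-temporal filtering itself costs nothing to learn because the STFT kernels are fixed.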
