Paper Title

Block Walsh-Hadamard Transform Based Binary Layers in Deep Neural Networks

Paper Authors

Hongyi Pan, Diaa Badawi, Ahmet Enis Cetin

Paper Abstract

Convolution has been the core operation of modern deep neural networks. It is well known that convolutions can be implemented in the Fourier transform domain. In this paper, we propose to use the binary block Walsh-Hadamard transform (WHT) instead of the Fourier transform. We use WHT-based binary layers to replace some of the regular convolution layers in deep neural networks. We utilize both one-dimensional (1-D) and two-dimensional (2-D) binary WHTs in this paper. In both the 1-D and 2-D layers, we compute the binary WHT of the input feature map and denoise the WHT-domain coefficients using a nonlinearity obtained by combining soft-thresholding with the tanh function. After denoising, we compute the inverse WHT. We use the 1D-WHT to replace the $1\times 1$ convolutional layers, and 2D-WHT layers can replace the $3\times 3$ convolution layers and Squeeze-and-Excite layers. 2D-WHT layers with trainable weights can also be inserted before the Global Average Pooling (GAP) layers to assist the dense layers. In this way, we can reduce the number of trainable parameters significantly with only a slight decrease in accuracy. In this paper, we implement the WHT layers in MobileNet-V2, MobileNet-V3-Large, and ResNet to reduce the number of parameters significantly with negligible accuracy loss. Moreover, according to our speed test, the 2D-FWHT layer runs about 24 times as fast as the regular $3\times 3$ convolution with 19.51\% less RAM usage in an NVIDIA Jetson Nano experiment.
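
For readers who want a concrete picture of the layer described in the abstract, below is a minimal NumPy sketch of the 1D-WHT idea: a channel-wise forward Walsh-Hadamard transform, a thresholding nonlinearity in the transform domain, and the inverse transform. The exact form of `smooth_threshold` (one possible reading of "combining soft-thresholding with the tanh function"), the fixed threshold `t`, and the normalization are assumptions for illustration, not the authors' reference implementation.

```python
# Minimal sketch of a 1-D WHT "binary layer": forward WHT -> denoise -> inverse WHT.
# The nonlinearity and normalization below are illustrative assumptions only.
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Return the n x n Hadamard matrix (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def smooth_threshold(x: np.ndarray, t: float) -> np.ndarray:
    """One reading of 'soft-thresholding combined with tanh':
    tanh keeps the sign and saturates, ReLU(|x| - t) zeroes small coefficients."""
    return np.tanh(x) * np.maximum(np.abs(x) - t, 0.0)

def wht_1d_layer(x: np.ndarray, t: float = 0.1) -> np.ndarray:
    """x: feature map of shape (N, C), with C a power of 2 (channel-wise 1D-WHT)."""
    c = x.shape[-1]
    H = hadamard(c)
    X = x @ H                      # forward WHT (unnormalized)
    X = smooth_threshold(X, t)     # denoise in the WHT domain
    return (X @ H) / c             # inverse WHT (H is its own inverse up to 1/C)

if __name__ == "__main__":
    x = np.random.randn(4, 8)             # 4 feature vectors, 8 channels
    print(wht_1d_layer(x, t=0.1).shape)   # (4, 8)
```

In the paper the layers carry trainable weights (and the transform is applied block-wise over the feature map); this sketch keeps everything fixed and channel-wise to stay short.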
