Paper Title

MajorityNets: BNNs Utilising Approximate Popcount for Improved Efficiency

Authors

Seyedramin Rasoulinezhad, Sean Fox, Hao Zhou, Lingli Wang, David Boland, Philip H. W. Leong

Abstract

Binarized neural networks (BNNs) have shown exciting potential for utilising neural networks in embedded implementations where area, energy and latency constraints are paramount. With BNNs, multiply-accumulate (MAC) operations can be simplified to XnorPopcount operations, leading to massive reductions in both memory and computation resources. Furthermore, multiple efficient implementations of BNNs have been reported on field-programmable gate arrays (FPGAs). This paper proposes a smaller, faster, more energy-efficient approximate replacement for the XnorPopcount operation, called XNorMaj, inspired by state-of-the-art FPGA look-up table schemes which benefit FPGA implementations. We show that XNorMaj is up to 2x more resource-efficient than the XnorPopcount operation. While the XNorMaj operation has a minor detrimental impact on accuracy, the resource savings enable us to use larger networks to recover the loss.
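
To make the idea concrete: in a BNN, a binarized dot product reduces to an XNOR of weight and activation bits followed by a popcount, and XNorMaj approximates that popcount with majority functions that map efficiently onto FPGA LUTs. The Python sketch below is only an illustration of this contrast, not the paper's implementation; the function names xnor_popcount and xnor_maj, the group size of 3, and the per-group majority vote are assumptions made for demonstration purposes.

import numpy as np

def xnor_popcount(w, x):
    # Exact BNN dot product: XNOR the bit vectors, then count matching bits.
    xnor = np.logical_not(np.logical_xor(w, x))  # 1 where weight and input agree
    return int(np.count_nonzero(xnor))           # exact popcount

def xnor_maj(w, x, group=3):
    # Approximate variant: replace the exact popcount with per-group majority
    # votes. Group size 3 is a hypothetical choice for illustration; the paper
    # maps majority functions onto FPGA look-up tables.
    xnor = np.logical_not(np.logical_xor(w, x))
    n = (len(xnor) // group) * group             # drop any ragged tail
    groups = xnor[:n].reshape(-1, group)
    maj = groups.sum(axis=1) > group // 2        # each group contributes one majority bit
    return int(np.count_nonzero(maj))

rng = np.random.default_rng(0)
w = rng.integers(0, 2, 96).astype(bool)  # binarized weights
x = rng.integers(0, 2, 96).astype(bool)  # binarized activations
print(xnor_popcount(w, x))               # exact count, range [0, 96]
print(xnor_maj(w, x))                    # coarser majority-based count, range [0, 32]

The coarser majority output is where the accuracy loss mentioned in the abstract comes from, and the smaller counting circuit is where the resource savings come from.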
