Paper Title
Energy awareness in low precision neural networks
Paper Authors
Paper Abstract
Power consumption is a major obstacle in the deployment of deep neural networks (DNNs) on end devices. Existing approaches for reducing power consumption rely on quite general principles, including avoidance of multiplication operations and aggressive quantization of weights and activations. However, these methods do not take into account the precise power consumed by each module in the network, and are therefore not optimal. In this paper we develop accurate power consumption models for all arithmetic operations in the DNN, under various working conditions. We reveal several important factors that have been overlooked to date. Based on our analysis, we present PANN (power-aware neural network), a simple approach for approximating any full-precision network by a low-power fixed-precision variant. Our method can be applied to a pre-trained network, and can also be used during training to achieve improved performance. In contrast to previous methods, PANN incurs only a minor degradation in accuracy w.r.t. the full-precision version of the network, even when working at the power budget of a 2-bit quantized variant. In addition, our scheme makes it possible to seamlessly traverse the power-accuracy trade-off at deployment time, which is a major advantage over existing quantization methods that are constrained to specific bit widths.
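The abstract refers to approximating a full-precision network by a fixed-precision (quantized) variant and to trading accuracy for power by changing the bit width. As a point of reference only, below is a minimal NumPy sketch of plain uniform symmetric weight quantization; the function quantize_weights and the chosen bit widths are illustrative assumptions, not the PANN method described in the paper.

```python
import numpy as np

def quantize_weights(w: np.ndarray, num_bits: int) -> np.ndarray:
    """Uniform symmetric quantization of a weight tensor to `num_bits`.

    Illustrative only: a generic fixed-precision approximation of
    full-precision weights, not the PANN scheme from the paper.
    """
    # Signed integer grid, e.g. [-2, 1] for 2 bits, [-128, 127] for 8 bits.
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), qmin, qmax)
    return q * scale  # de-quantized approximation of the original weights

# Example: compare approximation error at different bit widths
# (a rough proxy for the power-accuracy trade-off the abstract describes).
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
for bits in (2, 4, 8):
    err = np.mean((w - quantize_weights(w, bits)) ** 2)
    print(f"{bits}-bit quantization, mean squared error: {err:.6f}")
```

Lower bit widths yield coarser approximations (larger error) but cheaper arithmetic, which is the trade-off that the abstract's deployment-time knob is intended to traverse.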