Paper Title

Up or Down? Adaptive Rounding for Post-Training Quantization

Paper Authors

Markus Nagel, Rana Ali Amjad, Mart van Baalen, Christos Louizos, Tijmen Blankevoort

Paper Abstract

When quantizing neural networks, assigning each floating-point weight to its nearest fixed-point value is the predominant approach. We find that, perhaps surprisingly, this is not the best we can do. In this paper, we propose AdaRound, a better weight-rounding mechanism for post-training quantization that adapts to the data and the task loss. AdaRound is fast, does not require fine-tuning of the network, and only uses a small amount of unlabelled data. We start by theoretically analyzing the rounding problem for a pre-trained neural network. By approximating the task loss with a Taylor series expansion, the rounding task is posed as a quadratic unconstrained binary optimization problem. We simplify this to a layer-wise local loss and propose to optimize this loss with a soft relaxation. AdaRound not only outperforms rounding-to-nearest by a significant margin but also establishes a new state-of-the-art for post-training quantization on several networks and tasks. Without fine-tuning, we can quantize the weights of Resnet18 and Resnet50 to 4 bits while staying within an accuracy loss of 1%.
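
The abstract only summarizes the method, but the core idea it describes, learning a per-weight "round up or down" decision by minimizing a layer-wise reconstruction loss with a soft relaxation, can be sketched concretely. Below is a minimal, illustrative sketch in PyTorch. The rectified-sigmoid parametrization and the regularizer that pushes the relaxation towards hard 0/1 decisions follow the spirit of the paper, but the specific hyperparameter values (`ZETA`, `GAMMA`, `lam`, `beta`, learning rate, iteration count) and the scalar quantization scale are assumptions made here for brevity, not the paper's exact recipe.

```python
import torch

ZETA, GAMMA = 1.1, -0.1  # stretch parameters of the rectified sigmoid (assumed values)

def rectified_sigmoid(v):
    """Soft rounding value h(V) in [0, 1]."""
    return torch.clamp(torch.sigmoid(v) * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def adaround_layer(w, x, scale, n_iter=1000, lam=0.01, beta=2.0, lr=1e-2):
    """Learn a per-weight up/down rounding decision for one linear layer.

    w:     full-precision weights, shape (out_features, in_features)
    x:     unlabelled calibration inputs, shape (batch, in_features)
    scale: quantization step size (a single scalar here for simplicity)
    """
    w_floor = torch.floor(w / scale)
    # Initialise V so that h(V) starts at the fractional part of w / scale.
    frac = (w / scale - w_floor).clamp(1e-4, 1.0 - 1e-4)
    p = ((frac - GAMMA) / (ZETA - GAMMA)).clamp(1e-4, 1.0 - 1e-4)
    v = torch.nn.Parameter(torch.log(p / (1.0 - p)))
    opt = torch.optim.Adam([v], lr=lr)
    y_fp = x @ w.t()                          # full-precision layer output

    for _ in range(n_iter):
        h = rectified_sigmoid(v)
        w_q = scale * (w_floor + h)           # soft-quantized weights
        rec_loss = ((y_fp - x @ w_q.t()) ** 2).mean()
        # Regulariser that pushes h towards 0 or 1, i.e. a hard rounding choice.
        reg = (1.0 - (2.0 * h - 1.0).abs().pow(beta)).sum()
        loss = rec_loss + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Final hard rounding: round up where h(V) > 0.5, otherwise round down.
    with torch.no_grad():
        w_int = w_floor + (rectified_sigmoid(v) > 0.5).float()
    return scale * w_int
```

Note that this sketch keeps the regularization strength fixed, whereas the paper anneals the relaxation during optimization so that the soft rounding variables settle into hard 0/1 decisions; as stated in the abstract, only a small amount of unlabelled calibration data is needed and the pre-trained weights themselves are never fine-tuned.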
