Paper Title
Customizing Number Representation and Precision
Paper Authors
Paper Abstract
There is a growing interest in the use of reduced-precision arithmetic, amplified by the recent interest in artificial intelligence, especially deep learning. Most architectures already provide reduced-precision capabilities (e.g., 8-bit integer, 16-bit floating point). In the context of FPGAs, any number format and bit-width can even be considered. In computer arithmetic, the representation of real numbers is a major issue. Fixed-point (FxP) and floating-point (FlP) are the main options for representing reals, each with its advantages and drawbacks. This chapter presents both the FxP and FlP number representations and draws a fair comparison between their cost, performance and energy, as well as their impact on accuracy during computations. It is shown that the choice between FxP and FlP is not obvious and strongly depends on the application considered. In some cases, low-precision floating-point arithmetic can be the most effective and provides some benefits over the classical fixed-point choice for energy-constrained applications.
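As a rough illustration of the accuracy trade-off the abstract refers to, the following minimal C sketch (not taken from the chapter; the Q1.6 fixed-point format and the test value are arbitrary assumptions) quantizes a real number to an 8-bit fixed-point code and compares its rounding error with that of a standard binary32 floating-point value.

/* Minimal sketch, assuming a hypothetical Q1.6 8-bit fixed-point format
 * (1 sign bit, 1 integer bit, 6 fraction bits). Not the chapter's method,
 * only an illustration of fixed-point vs. floating-point rounding error. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define FXP_FRAC_BITS 6  /* assumed number of fraction bits */

/* Quantize a real value to the assumed fixed-point grid (round to nearest). */
static int8_t to_fxp(double x) {
    return (int8_t)lround(x * (1 << FXP_FRAC_BITS));
}

/* Convert a fixed-point code back to a real value for error measurement. */
static double from_fxp(int8_t q) {
    return (double)q / (1 << FXP_FRAC_BITS);
}

int main(void) {
    double x = 0.7243;        /* arbitrary real value */
    int8_t q = to_fxp(x);     /* 8-bit fixed-point code */
    float  f = (float)x;      /* 32-bit floating-point value */

    printf("value         : %.10f\n", x);
    printf("FxP Q1.6 code : %d -> %.10f (error %.2e)\n",
           q, from_fxp(q), fabs(x - from_fxp(q)));
    printf("FlP binary32  : %.10f (error %.2e)\n",
           (double)f, fabs(x - (double)f));
    return 0;
}

Compiled with a standard C compiler (linking the math library), the sketch prints the quantization error of each representation; varying the number of fraction bits or switching to a 16-bit float would show how the relative accuracy of FxP and FlP depends on the dynamic range of the data, which is the kind of comparison the chapter develops.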