Title
AIBench: An Agile Domain-specific Benchmarking Methodology and an AI Benchmark Suite
Authors
Abstract
Domain-specific software and hardware co-design is encouraging, as it is much easier to achieve efficiency for fewer tasks. Agile domain-specific benchmarking speeds up the process, as it provides not only relevant design inputs but also relevant metrics and tools. Unfortunately, modern workloads like big data, AI, and Internet services dwarf traditional ones in terms of code size, deployment scale, and execution path, and hence raise serious benchmarking challenges. This paper proposes an agile domain-specific benchmarking methodology. Together with seventeen industry partners, we identify ten important end-to-end application scenarios, from which sixteen representative AI tasks are distilled as AI component benchmarks. We propose permutations of essential AI and non-AI component benchmarks as end-to-end benchmarks. An end-to-end benchmark is a distillation of the essential attributes of an industry-scale application. We design and implement a highly extensible, configurable, and flexible benchmark framework, on the basis of which we propose a guideline for building end-to-end benchmarks and present the first end-to-end Internet service AI benchmark. A preliminary evaluation shows the value of our benchmark suite---AIBench against MLPerf and TailBench---for hardware and software designers, micro-architectural researchers, and code developers. The specifications, source code, testbed, and results are publicly available from the website \url{http://www.benchcouncil.org/AIBench/index.html}.
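The idea of composing component benchmarks into an end-to-end benchmark can be illustrated with a minimal sketch. All names and stages below are hypothetical stand-ins, not AIBench's actual modules: the point is only that an end-to-end benchmark chains non-AI and AI components in a scenario-specific order.

```python
from typing import Callable, List

# Hypothetical non-AI component: normalize an incoming query.
def non_ai_query_parse(query: str) -> str:
    return query.strip().lower()

# Hypothetical AI component stand-in: produce candidate items for a query.
def ai_recommend(query: str) -> List[str]:
    return [f"{query}-item-{i}" for i in range(3)]

# Hypothetical AI component stand-in: order the candidates (here, lexicographically).
def ai_rank(candidates: List[str]) -> List[str]:
    return sorted(candidates)

def compose(*stages: Callable) -> Callable:
    """Chain component benchmarks into one end-to-end pipeline."""
    def pipeline(data):
        for stage in stages:
            data = stage(data)
        return data
    return pipeline

# One permutation of components yields one end-to-end benchmark.
end_to_end = compose(non_ai_query_parse, ai_recommend, ai_rank)
print(end_to_end("  Search Term  "))
```

A different application scenario would select a different subset and ordering of components, which is how one framework can generate many end-to-end benchmarks.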