Paper Title
Efficiently Scaling Transformer Inference
Paper Authors
Paper Abstract
We study the problem of efficient generative inference for Transformer models, in one of its most challenging settings: large deep models, with tight latency targets and long sequence lengths. A better understanding of the engineering tradeoffs of inference for large Transformer-based models is important as use cases of these models are growing rapidly throughout application areas. We develop a simple analytical model for inference efficiency to select the best multi-dimensional partitioning techniques optimized for TPU v4 slices based on the application requirements. We combine these with a suite of low-level optimizations to achieve a new Pareto frontier on the latency and model FLOPS utilization (MFU) tradeoffs on 500B+ parameter models that outperforms the FasterTransformer suite of benchmarks. We further show that with appropriate partitioning, the lower memory requirements of multiquery attention (i.e., multiple query heads share a single key/value head) enable scaling up to 32x larger context lengths. Finally, we achieve a low-batch-size latency of 29ms per token during generation (using int8 weight quantization) and a 76% MFU during large-batch-size processing of input tokens, while supporting a long 2048-token context length on the PaLM 540B parameter model.
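The context-length claim for multiquery attention rests on the key/value cache being a dominant memory cost during generation at long sequence lengths. The following is a minimal back-of-the-envelope sketch, not code from the paper, that compares KV-cache sizes for standard multi-head attention versus multiquery attention (a single shared key/value head); the model shapes and the 2-byte (bf16) cache entries are illustrative assumptions, loosely based on published PaLM 540B configuration figures.

def kv_cache_bytes(batch, context_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    # 2x for keys and values, stored per layer for every cached token.
    return 2 * batch * context_len * n_layers * n_kv_heads * head_dim * bytes_per_elem

# Assumed PaLM-540B-like shapes (illustrative, not taken from this abstract).
n_layers, n_heads, head_dim = 118, 48, 256
batch, context_len = 32, 2048

multihead = kv_cache_bytes(batch, context_len, n_layers, n_kv_heads=n_heads, head_dim=head_dim)
multiquery = kv_cache_bytes(batch, context_len, n_layers, n_kv_heads=1, head_dim=head_dim)

print(f"multi-head KV cache: {multihead / 1e9:.1f} GB")
print(f"multiquery KV cache: {multiquery / 1e9:.1f} GB")
print(f"reduction factor:    {multihead / multiquery:.0f}x")  # equals n_heads

Under these assumptions the KV cache shrinks by a factor equal to the number of query heads, which is the headroom that, with appropriate partitioning, the abstract credits for supporting much larger context lengths.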