Paper Title
Improving Top-K Decoding for Non-Autoregressive Semantic Parsing via Intent Conditioning
Paper Authors
Paper Abstract
Semantic parsing (SP) is a core component of modern virtual assistants like Google Assistant and Amazon Alexa. While sequence-to-sequence-based auto-regressive (AR) approaches are common for conversational semantic parsing, recent studies employ non-autoregressive (NAR) decoders to reduce inference latency while maintaining competitive parsing quality. However, a major drawback of NAR decoders is the difficulty of generating top-k (i.e., k-best) outputs with approaches such as beam search. To address this challenge, we propose a novel NAR semantic parser that introduces intent conditioning on the decoder. Inspired by traditional intent and slot tagging parsers, we decouple the top-level intent prediction from the rest of the parse. As the top-level intent largely governs the syntax and semantics of a parse, intent conditioning allows the model to better control beam search and improves the quality and diversity of top-k outputs. We introduce a hybrid teacher-forcing approach to avoid a mismatch between training and inference. We evaluate the proposed NAR parser on the conversational SP datasets TOP and TOPv2. Like existing NAR models, we maintain O(1) decoding time complexity while generating more diverse outputs and improving the top-3 exact match (EM) by 2.4 points. Compared with AR models, our model speeds up beam-search inference by 6.7 times on CPU with competitive top-k EM.
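To make the decoding scheme described above concrete, the following is a minimal sketch (not the authors' implementation) of intent-conditioned top-k NAR decoding: the top-level intent is predicted first, the k best intents seed the candidate list, and the remaining parse tokens for each candidate are filled in with a single parallel (non-autoregressive) pass conditioned on that intent. All module names, layer choices, sizes, and the toy vocabulary are illustrative assumptions.

# Minimal sketch of intent-conditioned top-k NAR decoding (illustrative, not the paper's code).
import torch
import torch.nn as nn


class IntentConditionedNARParser(nn.Module):
    def __init__(self, vocab_size=1000, num_intents=25, hidden=256, max_len=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)   # stand-in utterance encoder
        self.intent_head = nn.Linear(hidden, num_intents)         # top-level intent classifier
        self.intent_embed = nn.Embedding(num_intents, hidden)     # conditions the NAR decoder
        self.length_pos = nn.Parameter(torch.randn(max_len, hidden))  # learned target positions
        self.token_head = nn.Linear(hidden, vocab_size)           # fills all positions in parallel

    def forward(self, src_tokens, k=3):
        enc, _ = self.encoder(self.embed(src_tokens))              # (B, T, H)
        pooled = enc.mean(dim=1)                                   # (B, H) utterance representation

        # Step 1: predict the top-level intent and keep the k best candidates.
        intent_logits = self.intent_head(pooled)
        intent_scores, intent_ids = intent_logits.log_softmax(-1).topk(k, dim=-1)

        # Step 2: for each candidate intent, decode the rest of the parse
        # non-autoregressively (one parallel pass, O(1) decoding steps).
        parses = []
        for i in range(k):
            cond = pooled + self.intent_embed(intent_ids[:, i])    # intent conditioning
            dec_states = cond.unsqueeze(1) + self.length_pos       # (B, L, H) decoder states
            token_logits = self.token_head(dec_states)
            parses.append(token_logits.argmax(-1))                 # (B, L) parse tokens

        return intent_ids, intent_scores, parses


# Toy usage: a batch of two utterances of length 8 over a toy vocabulary.
model = IntentConditionedNARParser()
src = torch.randint(0, 1000, (2, 8))
intents, scores, top_k_parses = model(src, k=3)
print(intents.shape, len(top_k_parses), top_k_parses[0].shape)

In this sketch the candidate list over intents plays the role the abstract assigns to beam search: diversity of the top-k outputs comes from the k distinct top-level intents, while each parse body is still produced in a single parallel decoding step.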