Paper Title

Domain Adaptation for Semantic Parsing

Authors

Zechang Li, Yuxuan Lai, Yansong Feng, Dongyan Zhao

Abstract

Recently, semantic parsing has attracted much attention in the community. Although many neural modeling efforts have greatly improved the performance, it still suffers from the data scarcity issue. In this paper, we propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain. Our semantic parser benefits from a two-stage coarse-to-fine framework, thus can provide different and accurate treatments for the two stages, i.e., focusing on domain invariant and domain specific information, respectively. In the coarse stage, our novel domain discrimination component and domain relevance attention encourage the model to learn transferable domain general structures. In the fine stage, the model is guided to concentrate on domain related details. Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies. Additionally, we show that our model can well exploit limited target data to capture the difference between the source and target domain, even when the target domain has far fewer training instances.
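The two-stage coarse-to-fine flow described above can be illustrated with a minimal sketch. This is purely an assumption-laden toy: the function names, the slot notation (`@city0`), and the rule-based "stages" are hypothetical stand-ins for the paper's neural coarse and fine decoders, shown only to make the division of labor concrete (domain-general structure first, domain-specific details second).

```python
# Hypothetical sketch of a coarse-to-fine semantic parser.
# All names and the template notation are illustrative assumptions,
# not the authors' actual implementation.

def coarse_stage(utterance):
    """Predict a domain-general sketch: logical-form structure with slots.

    In the paper this stage is a neural decoder trained with a domain
    discrimination component and domain relevance attention; here it is
    a stand-in rule for illustration only.
    """
    if "flight" in utterance:
        return "lambda $0 (flight $0) (from $0 @city0) (to $0 @city1)"
    return None

def fine_stage(sketch, utterance, city_lexicon):
    """Fill domain-specific details (entity values) into the sketch slots."""
    cities = [c for c in city_lexicon if c in utterance]
    for i, city in enumerate(cities):
        sketch = sketch.replace(f"@city{i}", city)
    return sketch

def parse(utterance, city_lexicon):
    """Run both stages: structure first, then domain-specific filling."""
    sketch = coarse_stage(utterance)
    return fine_stage(sketch, utterance, city_lexicon) if sketch else None
```

For example, `parse("show me a flight from boston to denver", ["boston", "denver"])` yields `"lambda $0 (flight $0) (from $0 boston) (to $0 denver)"`: the coarse stage fixes the transferable structure, the fine stage supplies the target-domain entities.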
