Title
Montague semantics and modifier consistency measurement in neural language models
Authors
Abstract
This work proposes a novel methodology for measuring compositional behavior in contemporary language embedding models. Specifically, we focus on adjectival modifier phenomena in adjective-noun phrases. In recent years, distributional language representation models have demonstrated great practical success. At the same time, the need for interpretability has raised questions about their intrinsic properties and capabilities. Crucially, distributional models are often inconsistent when dealing with compositional phenomena in natural language, which has significant implications for their safety and fairness. Despite this, most current research on compositionality is directed solely towards improving their performance on similarity tasks. This work takes a different approach, introducing three novel tests of compositional behavior inspired by Montague semantics. Our experimental results indicate that current neural language models do not behave according to the expected linguistic theories. This suggests either that current language models lack the capability to capture the semantic properties we evaluate in limited contexts, or that linguistic theories in the Montagovian tradition do not match the expected capabilities of distributional models.
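The abstract does not spell out the three tests themselves. As an illustration only, the sketch below shows one generic way to probe consistency of adjectival modification in an embedding model, in the spirit of intersective modification in Montague semantics ("a red car is a car"): compare the embedding of an adjective-noun phrase with that of its head noun. The model name, the phrase list, and the use of cosine similarity are assumptions for the example, not the paper's actual methodology.

```python
# Illustrative sketch (not the paper's tests): check whether an embedding
# model keeps an adjective-noun phrase close to its head noun, as an
# intersective (Montagovian) reading of the modifier would predict.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed to be installed

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


# Hypothetical probe pairs: (modified phrase, head noun).
pairs = [("red car", "car"), ("small dog", "dog"), ("fake gun", "gun")]

for phrase, noun in pairs:
    e_phrase, e_noun = model.encode([phrase, noun])
    # A low similarity for intersective adjectives would hint at
    # inconsistent compositional behavior in the representation space.
    print(f"{phrase!r} vs {noun!r}: cosine = {cosine(e_phrase, e_noun):.3f}")
```

Note that non-intersective modifiers such as "fake" are exactly the cases where a high phrase-noun similarity should not be expected, which is why consistency probes of this kind typically distinguish modifier classes.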