Title

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision

Authors

Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi

Abstract

The black-box nature of neural models has motivated a line of research that aims to generate natural language rationales to explain why a model made certain predictions. Such rationale generation models, to date, have been trained on dataset-specific crowdsourced rationales, but this approach is costly and does not generalize to new tasks and domains. In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision with no additional annotation cost for human-written rationales. We investigate multiple ways to automatically generate rationales using pre-trained language models, neural knowledge models, and distant supervision from related tasks, and train generative models capable of composing explanatory rationales for unseen instances. We demonstrate our approach on the defeasible inference task, a nonmonotonic reasoning task in which an inference may be strengthened or weakened when new information (an update) is introduced. Our model shows promise in generating post-hoc rationales that explain why an inference is more or less likely given the additional information; however, it mostly generates trivial rationales, reflecting the fundamental limitations of neural language models. Conversely, the more realistic setup of jointly predicting the update or its type and generating a rationale is more challenging, suggesting an important future direction.
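
For readers unfamiliar with the task format, the minimal Python sketch below illustrates the structure of a defeasible inference instance (premise, hypothesis, update, and update type, where the update either strengthens or weakens the inference). The concrete sentences are hypothetical illustrations, not examples drawn from the paper's data.

```python
from dataclasses import dataclass

@dataclass
class DefeasibleInstance:
    """One defeasible (nonmonotonic) inference instance: an update
    either strengthens or weakens the inference from premise to
    hypothesis."""
    premise: str
    hypothesis: str
    update: str
    update_type: str  # "strengthener" or "weakener"

# Hypothetical examples (not from the paper's dataset):
strengthener = DefeasibleInstance(
    premise="A man is carrying a stack of books.",
    hypothesis="He is heading to the library.",
    update="He is wearing a librarian's badge.",
    update_type="strengthener",  # makes the hypothesis more likely
)

weakener = DefeasibleInstance(
    premise="A man is carrying a stack of books.",
    hypothesis="He is heading to the library.",
    update="He stops at a used-book stall to sell them.",
    update_type="weakener",  # makes the hypothesis less likely
)

for inst in (strengthener, weakener):
    direction = "more" if inst.update_type == "strengthener" else "less"
    print(f"[{inst.update_type}] given {inst.update!r}, "
          f"the inference {inst.hypothesis!r} is {direction} likely")
```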
