Paper Title


BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual statements with deep pre-trained language representation models

Paper Authors

Martin Fajcik, Josef Jon, Martin Docekal, Pavel Smrz

Paper Abstract


This paper describes BUT-FIT's submission at SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals. The challenge focused on detecting whether a given statement contains a counterfactual (Subtask 1) and extracting both antecedent and consequent parts of the counterfactual from the text (Subtask 2). We experimented with various state-of-the-art language representation models (LRMs). We found RoBERTa LRM to perform the best in both subtasks. We achieved the first place in both exact match and F1 for Subtask 2 and ranked second for Subtask 1.
