Paper Title

Towards Human-Centred Explainability Benchmarks For Text Classification

Authors

Viktor Schlegel, Erick Mendez-Guzman, Riza Batista-Navarro

Abstract

Progress on many Natural Language Processing (NLP) tasks, such as text classification, is driven by objective, reproducible and scalable evaluation via publicly available benchmarks. However, these benchmarks are not always representative of the real-world scenarios in which text classifiers are employed, such as sentiment analysis or misinformation detection. In this position paper, we put forward two points that aim to alleviate this problem. First, we propose to extend text classification benchmarks to evaluate the explainability of text classifiers. We review the challenges associated with objectively evaluating the capability to produce valid explanations, which leads us to our second point: we propose to ground these benchmarks in human-centred applications, for example by using social media or gamification, or by learning explainability metrics from human judgements.
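To make the first point concrete, below is a minimal, hypothetical sketch (not taken from the paper) of the kind of automatic score an explainability-oriented benchmark could compute: a "comprehensiveness"-style faithfulness check in the spirit of the ERASER benchmark (DeYoung et al., 2020), which deletes the tokens an explanation marks as most important and measures the resulting drop in classifier confidence. The toy classifier, its weights, and all function names are illustrative assumptions.

```python
from math import exp

# Toy sentiment "classifier" (illustrative assumption): a linear
# bag-of-words score squashed to [0, 1] with a sigmoid.
WEIGHTS = {"great": 2.0, "love": 1.5, "terrible": -2.0, "boring": -1.2}

def predict_proba(tokens):
    """Return P(positive) for a tokenised input under the toy model."""
    score = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    return 1.0 / (1.0 + exp(-score))

def comprehensiveness(tokens, importances, k=2):
    """Confidence drop after deleting the k tokens the explanation
    rates highest; a larger drop suggests the explanation points at
    tokens the model actually relies on."""
    top = sorted(tokens, key=lambda t: importances.get(t, 0.0), reverse=True)[:k]
    reduced = [t for t in tokens if t not in top]
    return predict_proba(tokens) - predict_proba(reduced)

if __name__ == "__main__":
    text = ["i", "love", "this", "great", "movie"]
    explanation = {"love": 0.9, "great": 0.8, "movie": 0.1}  # token-importance scores
    print(f"comprehensiveness: {comprehensiveness(text, explanation):.3f}")  # ~0.471
```

In a benchmark setting, such a score would be averaged over a labelled corpus. The paper's second point is precisely that purely automatic proxies like this one should be grounded in, or learned from, human judgements rather than trusted on their own.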
