Paper Title

In-game Toxic Language Detection: Shared Task and Attention Residuals

Paper Authors

Yuanzhe Jia, Weixuan Wu, Feiqi Cao, Soyeon Caren Han

Paper Abstract

In-game toxic language has become a hot-button issue in the gaming industry and community. Several frameworks and models for online game toxicity analysis have been proposed; however, toxicity remains difficult to detect because in-game chat messages are extremely short. In this paper, we describe how an in-game toxic language shared task was established using real-world in-game chat data. In addition, we propose and introduce a model/framework for toxic language token tagging (slot filling) in in-game chat. The relevant code is publicly available on GitHub: https://github.com/Yuanzhe-Jia/In-Game-Toxic-Detection
