Paper Title


UTSA NLP at SemEval-2022 Task 4: An Exploration of Simple Ensembles of Transformers, Convolutional, and Recurrent Neural Networks

Paper Authors

Xingmeng Zhao, Anthony Rios

Abstract


The act of appearing kind or helpful while conveying a feeling of superiority through condescending and patronizing language can have serious mental health implications for those who experience it. Detecting such condescending and patronizing language online can therefore be useful for online moderation systems. In this manuscript, we describe the system developed by Team UTSA for SemEval-2022 Task 4, Detecting Patronizing and Condescending Language. Our approach explores the use of several deep learning architectures, including RoBERTa, convolutional neural networks, and Bidirectional Long Short-Term Memory networks. Furthermore, we explore simple and effective methods for creating ensembles of neural network models. Overall, we experimented with several ensemble models and found that a simple combination of five RoBERTa models achieved an F-score of .6441 on the development dataset and .5745 on the final test dataset. Finally, we performed a comprehensive error analysis to better understand the limitations of the model and provide ideas for further research.
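The abstract mentions a "simple combination" of five RoBERTa models but does not state the combination rule here. A minimal sketch of one common, simple choice, averaging each model's predicted class probabilities before taking the argmax, is shown below; the function name, array shapes, and toy numbers are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch: one simple way to ensemble N classifiers is to average
# their per-class probabilities, then predict the highest-scoring class.
import numpy as np

def ensemble_predict(model_probs):
    """Average per-model class probabilities and take the argmax.

    model_probs: array of shape (n_models, n_examples, n_classes).
    Returns the predicted class index for each example.
    """
    avg = np.mean(model_probs, axis=0)   # shape: (n_examples, n_classes)
    return np.argmax(avg, axis=1)        # shape: (n_examples,)

# Toy example: 5 hypothetical "models", 2 examples, binary PCL vs. not-PCL.
probs = np.array([
    [[0.60, 0.40], [0.30, 0.70]],
    [[0.70, 0.30], [0.40, 0.60]],
    [[0.55, 0.45], [0.20, 0.80]],
    [[0.80, 0.20], [0.60, 0.40]],
    [[0.65, 0.35], [0.45, 0.55]],
])
print(ensemble_predict(probs))  # -> [0 1]
```

Probability averaging (soft voting) is often preferred over majority voting for small ensembles like this one, since it uses each model's confidence rather than only its hard prediction.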
