Paper Title

Pose-based Body Language Recognition for Emotion and Psychiatric Symptom Interpretation

Paper Authors

Zhengyuan Yang, Amanda Kay, Yuncheng Li, Wendi Cross, Jiebo Luo

Paper Abstract

Inspired by the human ability to infer emotions from body language, we propose an automated framework for body-language-based emotion recognition that starts from regular RGB videos. In collaboration with psychologists, we further extend the framework to psychiatric symptom prediction. Because a specific application domain of the proposed framework may only supply a limited amount of data, the framework is designed to work with a small training set and to transfer well across domains. In the first stage, the proposed system generates sequences of body language predictions based on human poses estimated from the input videos. In the second stage, the predicted sequences are fed into a temporal network for emotion interpretation and psychiatric symptom prediction. We first validate the accuracy and transferability of the proposed body language recognition method on several public action recognition datasets. We then evaluate the framework on the proposed URMC dataset, which consists of conversations between standardized patients and behavioral health professionals, along with expert annotations of body language, emotions, and potential psychiatric symptoms. The proposed framework outperforms other methods on the URMC dataset.
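The abstract's two-stage design (pose sequence → per-frame body language predictions → temporal model for emotion) can be sketched as below. This is a minimal illustration only: the joint count, class counts, the linear classifier in stage 1, and the mean-pooled embedding standing in for the paper's temporal network are all assumptions, not the authors' actual models.

```python
import numpy as np

# Hypothetical dimensions, not taken from the paper
NUM_JOINTS = 17          # e.g., a COCO-style 2D pose skeleton
NUM_BODY_LANGUAGE = 10   # body language classes
NUM_EMOTIONS = 4         # emotion classes

rng = np.random.default_rng(0)

def stage1_body_language(pose_seq, w):
    """Stage 1: map each frame's estimated pose to a body language class.

    pose_seq: (T, NUM_JOINTS, 2) array of 2D joint coordinates.
    Returns a (T,) array of predicted body language class indices.
    """
    feats = pose_seq.reshape(len(pose_seq), -1)  # flatten joints per frame
    scores = feats @ w                           # linear classifier stand-in
    return scores.argmax(axis=1)

def stage2_emotion(bl_seq, emb, w_out):
    """Stage 2: interpret the predicted body language sequence.

    Mean-pooling of class embeddings stands in for the temporal network.
    Returns a (NUM_EMOTIONS,) vector of emotion scores.
    """
    pooled = emb[bl_seq].mean(axis=0)            # average class embeddings
    return pooled @ w_out

# Toy weights and a 30-frame pose sequence
w1 = rng.standard_normal((NUM_JOINTS * 2, NUM_BODY_LANGUAGE))
emb = rng.standard_normal((NUM_BODY_LANGUAGE, 8))
w2 = rng.standard_normal((8, NUM_EMOTIONS))
poses = rng.standard_normal((30, NUM_JOINTS, 2))

bl = stage1_body_language(poses, w1)             # (30,) body language labels
emotion_scores = stage2_emotion(bl, emb, w2)     # (4,) emotion scores
print(bl.shape, emotion_scores.shape)
```

The key design point the sketch preserves is the intermediate body language representation: stage 2 sees only the predicted label sequence, not raw video, which is what makes the framework workable on small training sets and transferable across domains.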
