Paper Title

Vision-Based Activity Recognition in Children with Autism-Related Behaviors

Authors

Pengbo Wei, David Ahmedt-Aristizabal, Harshala Gammulle, Simon Denman, Mohammad Ali Armin

Abstract

Advances in machine learning and contactless sensors have enabled the understanding of complex human behaviors in healthcare settings. In particular, several deep learning systems have been introduced to enable comprehensive analysis of neurodevelopmental conditions such as Autism Spectrum Disorder (ASD). This condition affects children from their early developmental stages onwards, and diagnosis relies entirely on observing the child's behavior and detecting behavioral cues. However, the diagnostic process is time-consuming, as it requires long-term behavioral observation and is further constrained by the scarce availability of specialists. We demonstrate the effectiveness of a region-based computer vision system designed to help clinicians and parents analyze a child's behavior. For this purpose, we adopt and enhance a dataset for analyzing autism-related actions using videos of children captured in uncontrolled environments (e.g., videos collected with consumer-grade cameras in varied settings). The data is pre-processed by detecting the target child in each video to reduce the impact of background noise. Motivated by the effectiveness of temporal convolutional models, we propose both lightweight and conventional models capable of extracting action features from video frames and classifying autism-related behaviors by analyzing the relationships between frames in a video. Through extensive evaluation of feature extraction and learning strategies, we demonstrate that the best performance is achieved with an Inflated 3D ConvNet and Multi-Stage Temporal Convolutional Networks, attaining a 0.83 Weighted F1-score for classification of the three autism-related actions and outperforming existing methods. We also propose a lightweight solution by employing the ESNet backbone within the same system, achieving a competitive 0.71 Weighted F1-score and enabling potential deployment on embedded systems.
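The abstract reports results as Weighted F1-scores (0.83 for the I3D + MS-TCN system, 0.71 for the ESNet variant), i.e., per-class F1 averaged with weights proportional to each class's support. A minimal sketch of that metric is shown below; the class labels and predictions are purely hypothetical placeholders for the three autism-related actions, and in practice `sklearn.metrics.f1_score(..., average="weighted")` is the usual implementation.

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to class support."""
    classes = sorted(set(y_true))
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in classes:
        # One-vs-rest counts for class c
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        # Weight each class's F1 by its share of the true labels
        score += (support[c] / total) * f1
    return score

# Hypothetical predictions over three illustrative action classes
y_true = ["action_a", "action_a", "action_b", "action_c", "action_c", "action_c"]
y_pred = ["action_a", "action_b", "action_b", "action_c", "action_c", "action_a"]
print(round(weighted_f1(y_true, y_pred), 3))  # → 0.678
```

Weighted averaging matters here because classes in behavior datasets are typically imbalanced; a plain macro average would let a rare, poorly-classified action dominate the reported score.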
