Paper Title

Privacy Sensitive Speech Analysis Using Federated Learning to Assess Depression

Authors

Suhas BN, Saeed Abdullah

Abstract

Recent studies have used speech signals to assess depression. However, speech features can lead to serious privacy concerns. To address these concerns, prior work has used privacy-preserving speech features. However, using a subset of features can lead to information loss and, consequently, non-optimal model performance. Furthermore, prior work relies on a centralized approach to support continuous model updates, posing privacy risks. This paper proposes to use Federated Learning (FL) to enable decentralized, privacy-preserving speech analysis to assess depression. Using an existing dataset (DAIC-WOZ), we show that FL models enable a robust assessment of depression with only 4--6% accuracy loss compared to a centralized approach. These models also outperform prior work using the same dataset. Furthermore, the FL models have short inference latency and small memory footprints while being energy-efficient. These models, thus, can be deployed on mobile devices for real-time, continuous, and privacy-preserving depression assessment at scale.
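The core idea behind the federated approach described above is that each device trains on its own data and shares only model weights, never raw speech, with a server that averages them. Below is a minimal, illustrative sketch of Federated Averaging (FedAvg) on a toy linear model; the model, client setup, and all function names are assumptions for illustration, not the authors' actual architecture or training pipeline.

```python
# Minimal FedAvg sketch: clients run local gradient descent on private data;
# the server averages the returned weights, weighted by client sample counts.
# The linear-regression model here is a toy stand-in for illustration only.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(global_w, client_data, rounds=10):
    """Server loop: broadcast weights, collect local updates, average them."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:          # raw (X, y) never leaves the client
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        total = sum(sizes)
        # Weighted average of client weights (FedAvg aggregation step)
        global_w = sum(n / total * w for w, n in zip(updates, sizes))
    return global_w

# Toy experiment: three clients whose data share one underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(X, X @ true_w) for X in (rng.normal(size=(50, 2)) for _ in range(3))]

w = fed_avg(np.zeros(2), clients)
print(np.round(w, 2))  # converges toward true_w without pooling raw data
```

Only the weight vectors cross the network in this sketch, which is what lets the paper's approach support continuous model updates without centralizing sensitive speech features.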
