Paper Title
Federated and Generalized Person Re-identification through Domain and Feature Hallucinating
Paper Authors
Paper Abstract
In this paper, we study the problem of federated domain generalization (FedDG) for person re-identification (re-ID), which aims to learn a generalized model from multiple decentralized labeled source domains. An empirical method (FedAvg) trains local models individually and averages them to obtain a global model for further local fine-tuning or deployment in unseen target domains. One drawback of FedAvg is that it neglects the data distributions of other clients during local training, causing the local models to overfit their local data and producing a poorly generalized global model. To solve this problem, we propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models. Specifically, after each model aggregation step, we share the Domain-level Feature Statistics (DFS) among different clients without violating data privacy. During local training, the DFS are used to synthesize novel domain statistics with the proposed domain hallucinating, which is achieved by re-weighting the DFS with random weights. Then, we propose feature hallucinating to diversify local features by scaling and shifting them toward the distribution of the obtained novel domain. The synthesized novel features retain the original pair-wise similarities, enabling us to use them to optimize the model in a supervised manner. Extensive experiments verify that the proposed DFH effectively improves the generalization ability of the global model. Our method achieves state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
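As a rough illustration of the mechanism described in the abstract (not the authors' implementation), the sketch below assumes the DFS are per-domain channel-wise feature means and standard deviations shared after each aggregation round; domain hallucinating is modeled as a random convex combination of the shared statistics, and feature hallucinating as an AdaIN-style re-normalization of local features toward the hallucinated statistics. All function and tensor names are hypothetical.

```python
# Minimal sketch in PyTorch, under the assumptions stated above.
import torch

def domain_hallucinate(dfs_mean, dfs_std):
    """Synthesize novel domain statistics by re-weighting the shared DFS
    with random convex weights (one plausible reading of 'random weights').

    dfs_mean, dfs_std: (K, C) tensors, K = number of source domains/clients.
    Returns a (C,) mean and std describing a hallucinated domain.
    """
    k = dfs_mean.size(0)
    w = torch.distributions.Dirichlet(torch.ones(k)).sample()  # random convex weights
    novel_mean = (w[:, None] * dfs_mean).sum(dim=0)
    novel_std = (w[:, None] * dfs_std).sum(dim=0)
    return novel_mean, novel_std

def feature_hallucinate(feat, novel_mean, novel_std, eps=1e-6):
    """Scale and shift local features toward the hallucinated domain
    (AdaIN-style re-normalization). feat: (N, C) batch of local features.
    Identity labels are unchanged, so the hallucinated features can be fed
    to the same supervised re-ID losses as the originals.
    """
    mu = feat.mean(dim=0, keepdim=True)
    sigma = feat.std(dim=0, keepdim=True) + eps
    normalized = (feat - mu) / sigma
    return normalized * novel_std[None, :] + novel_mean[None, :]

# Hypothetical usage during local training: augment a batch of features
# and optimize the usual supervised re-ID losses on both versions.
feats = torch.randn(32, 2048)                                   # local batch features
dfs_mean, dfs_std = torch.randn(4, 2048), torch.rand(4, 2048)   # shared stats from 4 clients
novel_mean, novel_std = domain_hallucinate(dfs_mean, dfs_std)
aug_feats = feature_hallucinate(feats, novel_mean, novel_std)
```

In this reading, only aggregated statistics (means and standard deviations) leave a client, never raw images or per-sample features, which is how sharing the DFS can avoid violating data privacy while still exposing each local model to other clients' distributions.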