Paper Title

Identity-Guided Human Semantic Parsing for Person Re-Identification

Authors

Kuan Zhu, Haiyun Guo, Zhiwei Liu, Ming Tang, Jinqiao Wang

Abstract

Existing alignment-based methods have to employ pretrained human parsing models to achieve pixel-level alignment, and they cannot identify personal belongings (e.g., backpacks and reticules) that are crucial to person re-ID. In this paper, we propose the identity-guided human semantic parsing approach (ISP), which locates both human body parts and personal belongings at the pixel level for aligned person re-ID using only person identity labels. We design a cascaded clustering on feature maps to generate pseudo-labels of human parts. Specifically, for the pixels of all images of a person, we first group them into foreground or background, and then group the foreground pixels into human parts. The cluster assignments are subsequently used as pseudo-labels of human parts to supervise the part estimation, and ISP iteratively learns the feature maps and groups them. Finally, local features of both human body parts and personal belongings are obtained according to the self-learned part estimation, and only the features of visible parts are used for retrieval. Extensive experiments on three widely used datasets validate the superiority of ISP over many state-of-the-art methods. Our code is available at https://github.com/CASIA-IVA-Lab/ISP-reID.
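
As a rough illustration of the cascaded clustering described in the abstract, the sketch below runs a two-stage k-means over the pixel features of all images of one identity. It is a minimal sketch, not the authors' implementation (see the linked repository for that): the `feat_maps` input of shape (N, C, H, W), the part count `num_parts`, and the mean-activation foreground heuristic are all assumptions made for illustration.

```python
# Minimal sketch of cascaded clustering for part pseudo-labels (assumptions noted above).
import numpy as np
from sklearn.cluster import KMeans


def cascaded_cluster(feat_maps: np.ndarray, num_parts: int = 6) -> np.ndarray:
    """Return per-pixel pseudo-labels: 0 = background, 1..num_parts = parts."""
    n, c, h, w = feat_maps.shape
    # Flatten every pixel of every image of this identity into one (N*H*W, C) set.
    pixels = feat_maps.transpose(0, 2, 3, 1).reshape(-1, c)

    # Stage 1: cluster all pixels into two groups (foreground vs. background).
    fg_bg = KMeans(n_clusters=2, n_init=10).fit(pixels)
    # Heuristic (assumption): the cluster with the higher mean activation is
    # treated as foreground, since background pixels tend to respond weakly.
    fg_id = int(np.argmax([pixels[fg_bg.labels_ == i].mean() for i in range(2)]))
    fg_mask = fg_bg.labels_ == fg_id

    # Stage 2: cluster only the foreground pixels into `num_parts` groups,
    # which play the role of human body parts and personal belongings.
    parts = KMeans(n_clusters=num_parts, n_init=10).fit(pixels[fg_mask])

    # Assemble the pseudo-label map used to supervise the part estimation.
    labels = np.zeros(n * h * w, dtype=np.int64)   # 0 = background
    labels[fg_mask] = parts.labels_ + 1            # 1..num_parts = parts
    return labels.reshape(n, h, w)
```

In the full method these cluster assignments serve as pseudo-labels for a part-estimation branch, with clustering and feature learning alternating iteratively; local features are then pooled per predicted part, and only the parts visible in both images are compared at retrieval time.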
