Paper Title
A Strong Baseline for Fashion Retrieval with Person Re-Identification Models
Paper Authors
Abstract
Fashion retrieval is the challenging task of finding an exact match for a fashion item contained within an image. Difficulties arise from the fine-grained nature of clothing items and from very large intra-class and inter-class variance. Additionally, query and source images for the task usually come from different domains: street photos and catalogue photos, respectively. Due to these differences, a significant gap in quality, lighting, contrast, background clutter and item presentation exists between the domains. As a result, fashion retrieval is an active field of research in both academia and industry. Inspired by recent advances in Person Re-Identification research, we adapt leading ReID models to the fashion retrieval task. We introduce a simple baseline model for fashion retrieval that significantly outperforms previous state-of-the-art results despite a much simpler architecture. We conduct in-depth experiments on the Street2Shop and DeepFashion datasets and validate our results. Finally, we propose a cross-domain (cross-dataset) evaluation method to test the robustness of fashion retrieval models.
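The retrieval setup described above — embedding street-photo queries and shop-photo gallery images with a shared model, then ranking the gallery by similarity — can be sketched in a few lines. The function below is a minimal, illustrative top-k retrieval-accuracy metric over precomputed embeddings; the function name, arguments, and the choice of cosine similarity are assumptions for illustration, not the paper's exact evaluation code.

```python
import numpy as np

def top_k_accuracy(query_emb, gallery_emb, query_ids, gallery_ids, k=20):
    """Illustrative retrieval metric: fraction of queries whose true item id
    appears among the k most similar gallery embeddings (cosine similarity)."""
    # L2-normalize so a dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                          # (num_queries, num_gallery)
    # gallery indices sorted by descending similarity, truncated to top k
    top_k = np.argsort(-sims, axis=1)[:, :k]
    hits = [query_ids[i] in gallery_ids[top_k[i]] for i in range(len(query_ids))]
    return float(np.mean(hits))
```

The cross-dataset evaluation the abstract proposes would then amount to calling such a metric with a model trained on one dataset (e.g. DeepFashion) and query/gallery embeddings computed on the other (e.g. Street2Shop).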