Paper Title
Deep Reinforcement Learning with Label Embedding Reward for Supervised Image Hashing
Paper Authors
Paper Abstract
Deep hashing has shown promising results in image retrieval and recognition. Despite its success, most existing deep hashing approaches are rather similar: either a multi-layer perceptron or a CNN is applied to extract image features, followed by a binarization activation function such as sigmoid, tanh, or an autoencoder to generate binary codes. In this work, we introduce a novel decision-making approach to deep supervised hashing. We formulate the hashing problem as travelling across the vertices of the binary code space, and learn a deep Q-network with a novel label embedding reward defined by Bose-Chaudhuri-Hocquenghem (BCH) codes to explore the best path. Extensive experiments and analysis on the CIFAR-10 and NUS-WIDE datasets show that our approach outperforms state-of-the-art supervised hashing methods under various code lengths.
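To illustrate the decision-making formulation the abstract describes, the sketch below models hash generation as an agent travelling across vertices of the {0,1}^K hypercube, choosing one bit per step. This is a hypothetical toy, not the paper's implementation: the reward here is a simplified stand-in (negative Hamming distance to a fixed per-class target codeword), whereas the paper derives target codewords from BCH codes and trains a deep Q-network; the class and function names (`BitDecisionEnv`, `hamming`) are invented for illustration.

```python
def hamming(a, b):
    """Number of positions where two equal-length bit sequences differ."""
    return sum(x != y for x, y in zip(a, b))

class BitDecisionEnv:
    """Toy episode: at step t the agent sets bit t of the hash code to 0 or 1,
    i.e. it moves one edge at a time toward some vertex of the hypercube."""

    def __init__(self, target_codeword):
        # Stand-in for a label-embedding codeword (BCH-derived in the paper).
        self.target = tuple(target_codeword)
        self.k = len(self.target)
        self.code = []

    def step(self, bit):
        self.code.append(bit)
        done = len(self.code) == self.k
        # Label-embedding-style reward: the closer the partial code is to the
        # class codeword, the higher (less negative) the reward.
        reward = -hamming(self.code, self.target[:len(self.code)])
        return tuple(self.code), reward, done

# Usage: an oracle "policy" that always matches the target recovers the
# codeword exactly and earns the maximal reward of 0 at the final step.
env = BitDecisionEnv(target_codeword=(1, 0, 1, 1, 0, 0, 1, 0))
done = False
while not done:
    next_bit = env.target[len(env.code)]  # oracle action, for illustration only
    state, reward, done = env.step(next_bit)
print(state, reward)  # -> (1, 0, 1, 1, 0, 0, 1, 0) 0
```

In the paper's actual method, a learned Q-network replaces the oracle policy, selecting bit actions so as to maximize the BCH-defined label embedding reward.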