Paper Title

Fast Class-wise Updating for Online Hashing

Paper Authors

Mingbao Lin, Rongrong Ji, Xiaoshuai Sun, Baochang Zhang, Feiyue Huang, Yonghong Tian, Dacheng Tao

Paper Abstract

Online image hashing, which processes large-scale data in a streaming fashion to update the hash functions on-the-fly, has received increasing research attention recently. Most existing works study this problem under a supervised setting, i.e., using class labels to boost hashing performance, which suffers from defects in both adaptivity and efficiency: First, large amounts of training batches are required to learn up-to-date hash functions, which leads to poor online adaptivity. Second, the training is time-consuming, which contradicts the core need of online learning. In this paper, a novel supervised online hashing scheme, termed Fast Class-wise Updating for Online Hashing (FCOH), is proposed to address these two challenges by introducing a novel and efficient inner-product operation. To achieve fast online adaptivity, a class-wise updating method is developed that decomposes the binary code learning and alternately renews the hash functions in a class-wise fashion, which relieves the burden of large amounts of training batches. Quantitatively, this decomposition further leads to a storage saving of at least 75%. To further achieve online efficiency, we propose a semi-relaxation optimization, which accelerates the online training by treating different binary constraints independently. Without additional constraints or variables, the time complexity is significantly reduced. Such a scheme is also quantitatively shown to preserve past information well while updating the hash functions. Extensive experiments on three widely used datasets verify that the collective effort of class-wise updating and semi-relaxation optimization yields superior performance compared with various state-of-the-art methods.
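To make the two mechanisms in the abstract more concrete, below is a minimal NumPy sketch of what a class-wise online hashing update can look like. It is not the authors' FCOH implementation: the function names (init_class_codes, online_update), the per-class target codes, and the ridge-regression refit are all illustrative assumptions. Class-wise updating is mimicked by renewing only the codes of classes present in the current streaming batch, and the binary sign constraint is relaxed to a real-valued least-squares fit followed by binarization, standing in for the paper's semi-relaxation optimization.

```python
# Hypothetical sketch of class-wise online hashing (not the FCOH algorithm).
import numpy as np

def init_class_codes(num_classes, num_bits, rng):
    """One target binary code (+/-1 per bit) kept for each class."""
    return rng.choice([-1.0, 1.0], size=(num_classes, num_bits))

def online_update(W, class_codes, X, y, lam=1e-2):
    """One streaming step.

    W: (d, num_bits) linear hash projection, X: (n, d) feature batch,
    y: (n,) integer class labels. Returns the updated projection W;
    class_codes is renewed in place, class by class.
    """
    # Class-wise renewal: only classes seen in this batch are touched.
    for c in np.unique(y):
        mask = y == c
        avg = np.sign(X[mask] @ W).mean(axis=0)
        class_codes[c] = np.where(avg >= 0, 1.0, -1.0)
    # Each sample's target is the binary code of its class.
    B = class_codes[y]                                  # (n, num_bits)
    # Relaxed refit of W: min ||XW - B||^2 + lam ||W||^2 (ridge regression),
    # a stand-in for the paper's semi-relaxation optimization.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ B)

rng = np.random.default_rng(0)
d, bits, num_classes = 16, 8, 4
W = 0.1 * rng.standard_normal((d, bits))
codes = init_class_codes(num_classes, bits, rng)
for _ in range(5):                                      # simulated stream
    X = rng.standard_normal((32, d))
    y = rng.integers(0, num_classes, size=32)
    W = online_update(W, codes, X, y)
hash_codes = np.sign(X @ W)                             # query-time binarization
```

Note that storing one code per class rather than per sample is what makes this kind of decomposition memory-frugal, consistent with the storage saving claimed in the abstract; the exact 75% figure follows from the paper's own analysis, which this sketch does not reproduce.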
