Paper Title
End-to-end Learning of Compressible Features
Paper Authors
Paper Abstract
Pre-trained convolutional neural networks (CNNs) are powerful off-the-shelf feature generators and have been shown to perform very well on a variety of tasks. Unfortunately, the generated features are high-dimensional and expensive to store: potentially hundreds of thousands of floats per example when processing videos. Traditional entropy-based lossless compression methods are of little help, as they do not yield the desired level of compression, while general-purpose lossy compression methods based on energy compaction (e.g., PCA followed by quantization and entropy coding) are sub-optimal, as they are not tuned to the task-specific objective. We propose a learned method that jointly optimizes for compressibility along with the task objective used to learn the features. The plug-in nature of our method makes it straightforward to integrate with any target objective and to trade off against compressibility. We present results on multiple benchmarks and demonstrate that our method produces features that are an order of magnitude more compressible, while having a regularization effect that leads to a consistent improvement in accuracy.
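The joint optimization described in the abstract can be illustrated with a minimal sketch: a total loss that sums the task loss and a weighted compressibility penalty on the intermediate features. All names here (`task_loss`, `rate_proxy`, `joint_loss`, the weight `beta`, and the mean-absolute-value rate surrogate) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def task_loss(logits, label):
    # Cross-entropy for a single example (softmax over logits).
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def rate_proxy(features):
    # Hypothetical compressibility surrogate: mean absolute activation.
    # Pushing features toward zero makes them cheaper to entropy-code;
    # the paper's actual rate term may differ.
    return np.abs(features).mean()

def joint_loss(logits, label, features, beta=0.01):
    # Combined objective: task performance traded off against
    # feature compressibility via the weight beta.
    return task_loss(logits, label) + beta * rate_proxy(features)

features = np.array([0.5, -2.0, 0.0, 3.0])
logits = np.array([1.0, 2.0, 0.5])
print(joint_loss(logits, 1, features, beta=0.1))
```

Increasing `beta` biases training toward cheaper-to-store features at some potential cost in task accuracy; this scalar weight is the "trade-off against compressibility" the abstract refers to.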