Paper Title


Deep Transform and Metric Learning Network: Wedding Deep Dictionary Learning and Neural Networks

Authors

Wen Tang, Emilie Chouzenoux, Jean-Christophe Pesquet, Hamid Krim

Abstract


On account of its many successes in inference tasks and denoising applications, Dictionary Learning (DL) and its related sparse optimization problems have garnered a lot of research interest. While most solutions have focused on single-layer dictionaries, even the recently proposed Deep DL (DDL) methods have fallen short on a number of issues. We propose herein a novel DDL approach where each DL layer can be formulated as a combination of one linear layer and a Recurrent Neural Network (RNN). The RNN is shown to flexibly account for the layer-associated and learned metric. Our proposed work unveils new insights into Neural Networks and DDL and provides a new, efficient and competitive approach to jointly learn a deep transform and a metric for inference applications. Extensive experiments are carried out to demonstrate that the proposed method can not only outperform existing DDL methods but also state-of-the-art generic CNNs.
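The layer formulation described in the abstract (one linear transform combined with a recurrent update that encodes a learned metric) is reminiscent of unfolded sparse coding in the LISTA tradition. The following is a minimal sketch of that general idea, not the authors' exact algorithm: the function names (`dl_layer`, `soft_threshold`), the soft-thresholding nonlinearity, and the specific ISTA-like update rule are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximity operator of the l1 norm; promotes sparse codes.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dl_layer(x, W, M, bias, n_iter=10, lam=0.1):
    """One hypothetical DL layer: a linear layer (W @ x + bias) followed by
    an unfolded recurrent refinement. M plays the role of the learned,
    layer-associated metric; the weights are shared across iterations,
    which is what makes the refinement loop an RNN."""
    z = soft_threshold(W @ x + bias, lam)          # linear layer output
    for _ in range(n_iter):                        # recurrent (RNN) updates
        z = soft_threshold(W @ x + M @ z + bias, lam)
    return z

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                 # input signal
W = 0.1 * rng.standard_normal((16, 8))     # linear transform (dictionary analysis)
M = 0.05 * rng.standard_normal((16, 16))   # learned recurrent metric
b = np.zeros(16)

code = dl_layer(x, W, M, b)
print(code.shape)  # (16,)
```

Stacking several such layers, each with its own `W` and `M`, would give the deep transform; here only a single layer is shown for clarity.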
