Paper Title

Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision

Paper Authors

Risheng Liu, Zhu Liu, Pan Mu, Xin Fan, Zhongxuan Luo

Paper Abstract

In recent years, there has been growing interest in combining learnable modules with numerical optimization to solve low-level vision tasks. However, most existing approaches focus on designing specialized schemes to generate image/feature propagation, and there is a lack of unified consideration for constructing propagative modules, providing theoretical analysis tools, and designing effective learning mechanisms. To mitigate these issues, this paper proposes a unified optimization-inspired learning framework that aggregates Generative, Discriminative, and Corrective (GDC for short) principles with strong generalization across diverse optimization models. Specifically, by introducing a general energy minimization model and formulating its descent direction from different viewpoints (i.e., in a generative manner, based on a discriminative metric, and with optimality-based correction), we construct three propagative modules that effectively solve the optimization models with flexible combinations. We design two control mechanisms that provide non-trivial theoretical guarantees for both fully- and partially-defined optimization formulations. Supported by these theoretical guarantees, we can introduce diverse architecture augmentation strategies, such as normalization and search, to ensure stable propagation with convergence and to seamlessly integrate suitable modules into the propagation, respectively. Extensive experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC. The code is available at https://github.com/LiuZhu-CV/GDC-OptimizationLearning.
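To make the abstract's GDC idea more concrete, below is a minimal, schematic sketch (not the authors' implementation; see the linked repository for that) of how generative, discriminative, and corrective update modules could be alternated to minimize a toy energy of the form 0.5*||Ax - y||^2 + lam*R(x). The soft-thresholding prior step, the plain gradient step, the residual-based acceptance test, and all parameter values are illustrative assumptions introduced here for exposition.

```python
# Schematic GDC-style propagation on a toy linear inverse problem (NumPy only).
# Every module below is a hand-crafted stand-in, not the paper's learned design.
import numpy as np

def data_grad(x, A, y):
    """Gradient of the quadratic data-fidelity term 0.5 * ||A x - y||^2."""
    return A.T @ (A @ x - y)

def generative_step(x, lam):
    # Stand-in for a generative/prior-driven module; a learned proximal
    # network would replace this hand-crafted soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def discriminative_step(x, A, y, step):
    # Stand-in for a discriminative, metric-based descent direction;
    # here it is simply a gradient step on the data-fidelity term.
    return x - step * data_grad(x, A, y)

def corrective_step(x_new, x_old, A, y):
    # Toy "optimality-based correction": accept the candidate only if it
    # does not increase the data residual, otherwise keep the old iterate.
    res = lambda x: np.linalg.norm(A @ x - y)
    return x_new if res(x_new) <= res(x_old) else x_old

def gdc_propagation(A, y, lam=0.05, step=0.1, iters=100):
    """Alternate the three modules to drive the energy downhill."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x_d = discriminative_step(x, A, y, step)
        x_g = generative_step(x_d, lam)
        x = corrective_step(x_g, x, A, y)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 32)) / 8.0      # toy measurement operator
    x_true = np.zeros(32)
    x_true[:5] = 1.0                             # sparse ground truth
    y = A @ x_true + 0.01 * rng.standard_normal(64)
    x_hat = gdc_propagation(A, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the paper's setting, the hand-crafted prior and descent steps above would be replaced by learnable modules, and the simple residual check would be replaced by the proposed control mechanisms that come with convergence guarantees.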
