Paper Title

From heavy rain removal to detail restoration: A faster and better network

Authors

Yuanbo Wen, Tao Gao, Jing Zhang, Kaihao Zhang, Ting Chen

Abstract

The profound accumulation of precipitation during intense rainfall events can markedly degrade the quality of images, leading to the erosion of textural details. Despite the improvements observed in existing learning-based methods specialized for heavy rain removal, it is discerned that a significant proportion of these methods tend to overlook the precise reconstruction of intricate details. In this work, we introduce a simple dual-stage progressive enhancement network, denoted as DPENet, aiming to achieve effective deraining while preserving the structural accuracy of rain-free images. This approach comprises two key modules: a rain streaks removal network (R$^2$Net) focusing on accurate rain removal, and a details reconstruction network (DRNet) designed to recover the textural details of rain-free images. Firstly, we introduce a dilated dense residual block (DDRB) within R$^2$Net, enabling the aggregation of high-level and low-level features. Secondly, an enhanced residual pixel-wise attention block (ERPAB) is integrated into DRNet to facilitate the incorporation of contextual information. To further enhance the fidelity of our approach, we employ a comprehensive loss function that accentuates both the marginal and regional accuracy of rain-free images. Extensive experiments conducted on publicly available benchmarks demonstrate the noteworthy efficiency and effectiveness of our proposed DPENet. The source code and pre-trained models are currently available at \url{https://github.com/chdwyb/DPENet}.
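
The abstract specifies the architecture only at block level. For orientation, the PyTorch sketch below shows one plausible reading of the two-stage design: every layer count, channel width, dilation rate, attention shape, and loss weight here is an assumption for illustration, not the paper's actual configuration; the official implementation is at https://github.com/chdwyb/DPENet.

```python
# A minimal sketch of the two-stage DPENet design as described in the
# abstract. All structural details (layer counts, widths, dilations,
# attention wiring, loss weights) are assumptions for illustration only;
# see https://github.com/chdwyb/DPENet for the authors' implementation.
import torch
import torch.nn as nn


class DDRB(nn.Module):
    """Dilated dense residual block (assumed form): dilated convolutions
    densely concatenated, fused by a 1x1 conv, and added residually so
    that low-level and high-level features aggregate."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.conv2 = nn.Conv2d(2 * channels, channels, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.act(self.conv1(x))
        f2 = self.act(self.conv2(torch.cat([x, f1], dim=1)))
        return x + self.fuse(torch.cat([x, f1, f2], dim=1))


class ERPAB(nn.Module):
    """Enhanced residual pixel-wise attention block (assumed form): a
    per-pixel sigmoid map gates the features before the residual add,
    injecting contextual information."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.body(x)
        return x + f * self.attn(f)


class DPENetSketch(nn.Module):
    """Stage 1 (R^2Net) removes rain streaks; stage 2 (DRNet) reconstructs
    texture details on top of the intermediate derained estimate."""

    def __init__(self, channels: int = 32, n_ddrb: int = 4, n_erpab: int = 4):
        super().__init__()
        self.r2net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            *[DDRB(channels) for _ in range(n_ddrb)],
            nn.Conv2d(channels, 3, 3, padding=1),
        )
        self.drnet = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            *[ERPAB(channels) for _ in range(n_erpab)],
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, rainy):
        coarse = self.r2net(rainy)             # stage 1: streak removal
        refined = coarse + self.drnet(coarse)  # stage 2: detail reconstruction
        return coarse, refined


def hybrid_loss(pred, target, w_edge: float = 0.1):
    """Regional accuracy via per-pixel L1 plus marginal (edge) accuracy via
    gradient matching; the paper's exact loss composition is not given in
    the abstract, so this pairing and weight are assumptions."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    edge = (dx(pred) - dx(target)).abs().mean() + (dy(pred) - dy(target)).abs().mean()
    return (pred - target).abs().mean() + w_edge * edge


if __name__ == "__main__":
    model = DPENetSketch()
    rainy, clean = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    coarse, refined = model(rainy)
    print(refined.shape, hybrid_loss(refined, clean).item())
```

Splitting the pipeline this way lets R$^2$Net specialize purely in streak suppression while DRNet only has to repair the texture the first stage erased, which is also why pairing a gradient-matching (marginal) term with a per-pixel (regional) term fits the stated loss design.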
