Title

Monte-Carlo Siamese Policy on Actor for Satellite Image Super Resolution

Authors

Litu Rout, Saumyaa Shah, S. Manthira Moorthi, Debajyoti Dhar

Abstract

In the past few years, supervised and adversarial learning have been widely adopted in various complex computer vision tasks. It seems natural to wonder whether another branch of artificial intelligence, commonly known as Reinforcement Learning (RL), can benefit such complex vision tasks. In this study, we explore the plausible usage of RL in super resolution of remote sensing imagery. Guided by recent advances in super resolution, we propose a theoretical framework that leverages the benefits of supervised and reinforcement learning. We argue that a straightforward implementation of RL is not adequate to address ill-posed super resolution, as the action variables are not fully known. To tackle this issue, we propose to parameterize action variables by matrices and train our policy network using Monte-Carlo sampling. We study the implications of a parametric action space in a model-free environment from theoretical and empirical perspectives. Furthermore, we analyze the quantitative and qualitative results on both remote sensing and non-remote sensing datasets. Based on our experiments, we report considerable improvement over state-of-the-art methods by encapsulating supervised models in a reinforcement learning framework.
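The abstract's core idea of parameterizing actions as matrices and training the policy by Monte-Carlo sampling can be illustrated with a deliberately small toy problem. The sketch below is an assumption-laden stand-in, not the paper's method: the "policy" is just a Gaussian over 2x2 matrix actions, the "environment" is a fixed linear reconstruction target, and the gradient estimate is a score-function (REINFORCE-style) update with antithetic samples for variance reduction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not the paper's networks or data): the "action"
# is a 2x2 matrix A applied to an input x, and the reward is the negative
# reconstruction error against a target y = A_true @ x.
x = np.array([1.0, -0.5])
A_true = np.array([[2.0, 0.5], [0.0, 1.5]])   # unknown "ideal" action
y = A_true @ x

def reward(A):
    """Negative squared reconstruction error of the matrix action A."""
    return -np.sum((A @ x - y) ** 2)

# Gaussian policy over matrix-valued actions: mean M, fixed exploration scale.
M = np.zeros((2, 2))
sigma = 0.05
lr = 0.02

for _ in range(2000):
    # Monte-Carlo sampling of a matrix-valued action; an antithetic pair
    # (M + sigma*eps, M - sigma*eps) reduces the variance of the estimate.
    eps = rng.normal(size=(2, 2))
    r_plus = reward(M + sigma * eps)
    r_minus = reward(M - sigma * eps)
    # Score-function gradient estimate with respect to the policy mean M.
    grad = (r_plus - r_minus) / (2.0 * sigma) * eps
    M += lr * grad

final_err = np.sum((M @ x - y) ** 2)
print(final_err)
```

On this quadratic toy reward, the sampled updates drive the reconstruction error toward zero; the paper's actual setting replaces the fixed target with a supervised super-resolution model wrapped in the RL framework.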
