Paper Title
Unsupervised Transfer Learning for Spatiotemporal Predictive Networks
Paper Authors
Abstract
This paper explores a new research problem of unsupervised transfer learning across multiple spatiotemporal prediction tasks. Unlike most existing transfer learning methods that focus on fixing the discrepancy between supervised tasks, we study how to transfer knowledge from a zoo of unsupervisedly learned models towards another predictive network. Our motivation is that models from different sources are expected to understand the complex spatiotemporal dynamics from different perspectives, thereby effectively supplementing the new task, even if the task has sufficient training samples. Technically, we propose a differentiable framework named transferable memory. It adaptively distills knowledge from a bank of memory states of multiple pretrained RNNs, and applies it to the target network via a novel recurrent structure called the Transferable Memory Unit (TMU). Compared with finetuning, our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant pretext ones.
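The core mechanism described above — adaptively distilling knowledge from a bank of memory states of pretrained RNNs and gating it into the target network — can be illustrated with a minimal attention-plus-gate sketch. This is not the paper's actual TMU architecture; the weight matrices `W_q`, `W_k`, `W_g` and the single-step `tmu_step` function are hypothetical names introduced here for illustration only, using NumPy in place of a deep learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tmu_step(h_target, memory_bank, W_q, W_k, W_g):
    """One TMU-style read step (illustrative sketch): attend over the
    memory states of several pretrained source RNNs, then gate the
    distilled summary into the target network's hidden state."""
    q = h_target @ W_q                    # query from target hidden state
    k = memory_bank @ W_k                 # keys from source memory states
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # adaptive weights per source
    distilled = attn @ memory_bank        # weighted summary of source knowledge
    # Sigmoid gate decides how much distilled knowledge to inject.
    gate = 1.0 / (1.0 + np.exp(-(np.concatenate([h_target, distilled]) @ W_g)))
    return gate * h_target + (1.0 - gate) * distilled

d = 8
h = rng.standard_normal(d)                # target hidden state
bank = rng.standard_normal((3, d))        # memories from 3 pretrained RNNs
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
W_g = rng.standard_normal((2 * d, d))

h_new = tmu_step(h, bank, W_q, W_k, W_g)
print(h_new.shape)  # (8,) — same shape as the target hidden state
```

Because the attention weights depend on the target's own hidden state, the target network can emphasize different source models at different time steps, which is the "adaptive distillation" behavior the abstract refers to.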