Paper Title

Attentive Feature Reuse for Multi Task Meta learning

Paper Authors

Kiran Lekkala, Laurent Itti

Paper Abstract

We develop new algorithms for simultaneous learning of multiple tasks (e.g., image classification, depth estimation), and for adapting to unseen task/domain distributions within those high-level tasks (e.g., different environments). First, we learn common representations underlying all tasks. We then propose an attention mechanism to dynamically specialize the network, at runtime, for each task. Our approach is based on weighting each feature map of the backbone network, based on its relevance to a particular task. To achieve this, we enable the attention module to learn task representations during training, which are used to obtain attention weights. Our method improves performance on new, previously unseen environments, and is 1.5x faster than standard existing meta learning methods using similar architectures. We highlight performance improvements for Multi-Task Meta Learning of 4 tasks (image classification, depth, vanishing point, and surface normal estimation), each over 10 to 25 test domains/environments, a result that could not be achieved with standard meta learning techniques like MAML.
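A minimal sketch of the mechanism described above, assuming a PyTorch setting: a learned per-task representation produces one weight per backbone feature map (channel), and those weights rescale the shared features at runtime. All names here (TaskAttention, task_emb, to_weights, n_tasks) are illustrative assumptions, not the authors' implementation.

# Sketch (assumption, not the authors' code): per-task feature-map weighting.
# A learned task embedding is mapped to per-channel attention weights that
# rescale the shared backbone's feature maps for the current task.
import torch
import torch.nn as nn


class TaskAttention(nn.Module):
    """Weights each backbone feature map by its relevance to a task."""

    def __init__(self, n_tasks: int, channels: int, emb_dim: int = 64):
        super().__init__()
        # One representation per task, learned during training.
        self.task_emb = nn.Embedding(n_tasks, emb_dim)
        # Map a task representation to one weight per feature map (channel).
        self.to_weights = nn.Sequential(
            nn.Linear(emb_dim, channels),
            nn.Sigmoid(),  # keep weights in [0, 1]
        )

    def forward(self, features: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, H, W) from a shared backbone
        # task_id:  (batch,) integer id of the current task
        w = self.to_weights(self.task_emb(task_id))       # (batch, channels)
        return features * w.unsqueeze(-1).unsqueeze(-1)   # rescale each feature map


# Usage: specialize shared backbone features for task 2 out of 4 tasks.
backbone_feats = torch.randn(8, 256, 14, 14)
attn = TaskAttention(n_tasks=4, channels=256)
task_feats = attn(backbone_feats, torch.full((8,), 2, dtype=torch.long))
print(task_feats.shape)  # torch.Size([8, 256, 14, 14])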
