Paper Title
GPPF: A General Perception Pre-training Framework via Sparsely Activated Multi-Task Learning
Paper Authors
Paper Abstract
Pre-training over mixed multi-task, multi-domain, and multi-modal data remains an open challenge in vision perception pre-training. In this paper, we propose GPPF, a General Perception Pre-training Framework, which pre-trains a task-level dynamic network, composed of knowledge "legos" in each layer, on labeled multi-task and multi-domain datasets. By inspecting humans' innate ability to learn in complex environments, we identify and transfer three critical elements to deep networks: (1) simultaneous exposure to diverse cross-task and cross-domain information in each batch; (2) partitioned knowledge storage in separate lego units driven by knowledge sharing; (3) sparse activation of a subset of lego units for both pre-training and downstream tasks. Notably, the joint training of disparate vision tasks is non-trivial due to their differences in input shapes, loss functions, output formats, data distributions, etc. We therefore develop a plug-and-play multi-task training algorithm that supports Single Iteration Multiple Tasks (SIMT) concurrent training. SIMT lays the foundation for pre-training with large-scale multi-task, multi-domain datasets and proves essential for stable training in our GPPF experiments. Extensive experiments show that our GPPF-R50 model achieves significant improvements of 2.5-5.8 over a strong baseline on the 8 pre-training tasks in GPPF-15M and attains a range of SOTA results on the 22 downstream tasks with similar computation budgets. We also validate the generalization ability of GPPF to SOTA vision transformers with consistent improvements. These solid experimental results fully demonstrate the effective knowledge learning, storing, sharing, and transfer provided by our novel GPPF framework.
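The abstract names two mechanisms that a small sketch can make concrete: per-layer "lego" units with task-dependent sparse activation, and a SIMT-style step in which one optimizer update sees batches from several tasks at once. The following is a minimal PyTorch sketch under assumed details, not the authors' implementation; the module names (LegoLayer, LegoNet, simt_step), layer sizes, averaging of unit outputs, and the fixed routing table are all hypothetical stand-ins for whatever routing and fusion GPPF actually uses.

```python
# Hypothetical sketch of sparsely activated "lego" units plus a
# single-iteration multi-task (SIMT-style) training step.
import torch
import torch.nn as nn


class LegoLayer(nn.Module):
    """One layer holding several interchangeable 'lego' units; a task
    sparsely activates only the subset of units assigned to it."""

    def __init__(self, dim: int, num_units: int = 4):
        super().__init__()
        self.units = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_units)])

    def forward(self, x: torch.Tensor, active: list[int]) -> torch.Tensor:
        # Only the active units run (and receive gradients) for this task;
        # averaging their outputs is an assumed fusion rule.
        outs = [self.units[i](x) for i in active]
        return torch.relu(torch.stack(outs).mean(dim=0))


class LegoNet(nn.Module):
    """Task-level dynamic network: a routing table maps each task to the
    lego units it activates in every layer (fixed here for illustration)."""

    def __init__(self, dim: int, depth: int, routing: dict[str, list[int]]):
        super().__init__()
        self.layers = nn.ModuleList([LegoLayer(dim) for _ in range(depth)])
        self.routing = routing  # task name -> active unit indices

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x, self.routing[task])
        return x


def simt_step(model, heads, losses, batches, optimizer):
    """SIMT-style update: every optimizer step accumulates gradients from a
    batch of *each* task, each with its own head and loss, so cross-task and
    cross-domain signals are mixed within a single iteration."""
    optimizer.zero_grad()
    total = 0.0
    for task, (x, y) in batches.items():
        feats = model(x, task)
        loss = losses[task](heads[task](feats), y)
        loss.backward()  # gradients accumulate across tasks
        total += loss.item()
    optimizer.step()
    return total
```

In this sketch the routing table is static; in GPPF the per-task unit selection and the task-specific heads and losses (detection, segmentation, classification, and so on) would come from the framework itself, and the optimizer would cover both the shared lego units and all task heads.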