Paper Title

XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-domain Mixup

Paper Authors

Xingjian Li, Haoyi Xiong, Haozhe An, Chengzhong Xu, Dejing Dou

Paper Abstract

Transferring knowledge from large source datasets is an effective way to fine-tune deep neural networks for a target task with a small sample size. A great number of algorithms have been proposed to facilitate deep transfer learning, and these techniques can be generally categorized into two groups: regularized learning of the target task using models that have been pre-trained on source datasets, and multitask learning with both source and target datasets to train a shared backbone neural network. In this work, we aim to improve the multitask paradigm for deep transfer learning via Cross-domain Mixup (XMixup). While existing multitask learning algorithms need to run backpropagation over both the source and target datasets and usually incur a higher gradient complexity, XMixup transfers knowledge from the source to the target task more efficiently: for every class of the target task, XMixup selects auxiliary samples from the source dataset and augments the training samples via the simple mixup strategy. We evaluate XMixup on six real-world transfer learning datasets. Experimental results show that XMixup improves accuracy by 1.9% on average. Compared with other state-of-the-art transfer learning approaches, XMixup requires much less training time while still obtaining higher accuracy.
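As a rough illustration of the mixup step described in the abstract, the sketch below mixes each target sample with a pre-selected auxiliary source sample using a Beta-distributed coefficient, as in standard mixup. This is a minimal sketch under stated assumptions, not the paper's exact implementation: the function name `xmixup_batch`, the `alpha` value, and the premise that auxiliary samples have already been chosen per class (the abstract mentions a selection step but does not detail it) are illustrative placeholders.

```python
# Minimal sketch of cross-domain mixup, assuming standard mixup applied
# between target samples and pre-selected auxiliary source samples.
# The per-class auxiliary selection is assumed done beforehand; it is
# NOT implemented here, and names/shapes are illustrative only.
import numpy as np

def xmixup_batch(x_target, y_target, x_aux, y_aux, alpha=0.2, rng=None):
    """Mix each target sample with an auxiliary source sample.

    x_target: (B, ...) target inputs; y_target: (B, C_t) one-hot target labels.
    x_aux:    (B, ...) auxiliary source inputs aligned with the batch
              (assumed selected per target class); y_aux: (B, C_s).
    Returns the mixed inputs, both label sets, and the mixing weight lambda,
    so a training loop can combine the two classification losses.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixup coefficient in (0, 1)
    x_mixed = lam * x_target + (1.0 - lam) * x_aux
    return x_mixed, y_target, y_aux, lam

# Toy usage with random arrays standing in for real image batches.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_t = rng.random((4, 3, 32, 32))        # target batch
    y_t = np.eye(10)[rng.integers(0, 10, 4)]
    x_s = rng.random((4, 3, 32, 32))        # auxiliary source batch
    y_s = np.eye(100)[rng.integers(0, 100, 4)]
    x_m, y1, y2, lam = xmixup_batch(x_t, y_t, x_s, y_s)
    print(x_m.shape, lam)                   # (4, 3, 32, 32) and a scalar
```

In a training loop, one would typically feed `x_m` through the shared backbone and weight the target-task loss by `lam` and the source-task loss by `1 - lam`; since only mixed batches are backpropagated, this avoids the separate source-dataset backward passes that the abstract attributes to conventional multitask approaches.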
