Paper Title
Composite Learning for Robust and Effective Dense Predictions
Paper Authors
Paper Abstract
Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task. However, the current practice requires additional labeling efforts for the auxiliary task, while not guaranteeing better model performance. In this paper, we find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need for labeling auxiliary tasks. We refer to this joint training as Composite Learning (CompL). Experiments of CompL on monocular depth estimation, semantic segmentation, and boundary detection show consistent performance improvements in fully and partially labeled datasets. Further analysis on depth estimation reveals that joint training with self-supervision outperforms most labeled auxiliary tasks. We also find that CompL can improve model robustness when the models are evaluated in new domains. These results demonstrate the benefits of self-supervision as an auxiliary task, and establish the design of novel task-specific self-supervised methods as a new axis of investigation for future multi-task learning research.
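To make the joint training concrete, the sketch below shows one common way such a setup is structured: a shared encoder feeding a supervised dense-prediction head and a self-supervised auxiliary head, optimized under a weighted sum of the two losses. This is a minimal illustration under assumed design choices, not the paper's implementation; the module names, the specific losses, and the balancing weight `alpha` are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositeModel(nn.Module):
    """Shared-encoder model with a target head and an auxiliary head.
    All components are placeholders passed in by the caller."""

    def __init__(self, encoder, target_head, aux_head):
        super().__init__()
        self.encoder = encoder          # shared backbone features
        self.target_head = target_head  # dense prediction, e.g. depth
        self.aux_head = aux_head        # self-supervised prediction

    def forward(self, x):
        feats = self.encoder(x)
        return self.target_head(feats), self.aux_head(feats)

def composite_loss(target_pred, target_gt, aux_pred, aux_target, alpha=0.5):
    """Joint objective: supervised target loss plus a weighted
    self-supervised auxiliary loss. alpha is an assumed weight."""
    target_loss = F.l1_loss(target_pred, target_gt)  # e.g. depth regression
    aux_loss = F.mse_loss(aux_pred, aux_target)      # placeholder SSL objective
    return target_loss + alpha * aux_loss
```

Because the auxiliary target comes from the input itself (self-supervision), the auxiliary term requires no extra labels, which is the property the abstract highlights.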