Paper Title
FAKD: Feature Augmented Knowledge Distillation for Semantic Segmentation
Paper Authors
Paper Abstract
In this work, we explore data augmentations for knowledge distillation on semantic segmentation. To avoid over-fitting to the noise in the teacher network, a large number of training examples is essential for knowledge distillation. Image-level augmentation techniques such as flipping, translation, or rotation are widely used in previous knowledge distillation frameworks. Inspired by recent progress on semantic directions in feature space, we propose to include augmentations in the feature space for efficient distillation. Specifically, given a semantic direction, an infinite number of augmentations can be obtained for the student in the feature space. Furthermore, our analysis shows that these augmentations can be optimized simultaneously by minimizing an upper bound on the losses they define. Based on this observation, we develop a new algorithm for knowledge distillation in semantic segmentation. Extensive experiments on four semantic segmentation benchmarks demonstrate that the proposed method boosts the performance of current knowledge distillation methods without any significant overhead. Code is available at: https://github.com/jianlong-yuan/FAKD.
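To make the idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of feature-space augmentation for distillation: student features are perturbed along semantic directions drawn from a feature covariance, and a standard soft-label distillation loss is averaged over the perturbed copies. This sampling-based approximation is only illustrative; the paper itself minimizes a closed-form upper bound over (infinitely many) augmentations rather than sampling, and all names here (`sampled_feature_augmentation_kd`, `classifier`, `semantic_cov`, `strength`, `tau`) are assumptions, not identifiers from the released code.

```python
import torch
import torch.nn.functional as F

def sampled_feature_augmentation_kd(student_feat, teacher_logits, classifier,
                                    semantic_cov, num_samples=8,
                                    strength=0.5, tau=4.0):
    """Illustrative sampling-based approximation of feature-space augmented KD.

    student_feat:   (N, C) per-pixel student features (flattened H*W positions)
    teacher_logits: (N, K) per-pixel teacher logits at the same positions
    classifier:     module mapping (N, C) features to (N, K) logits
    semantic_cov:   (C, C) covariance whose directions act as semantic directions
    """
    # Random semantic perturbations: eps ~ N(0, strength * semantic_cov)
    dist = torch.distributions.MultivariateNormal(
        torch.zeros(student_feat.size(1), device=student_feat.device),
        covariance_matrix=strength * semantic_cov)

    loss = student_feat.new_zeros(())
    for _ in range(num_samples):
        direction = dist.sample((student_feat.size(0),))       # (N, C)
        aug_logits = classifier(student_feat + direction)       # (N, K)
        # Temperature-softened KL between teacher and augmented student predictions.
        loss = loss + F.kl_div(F.log_softmax(aug_logits / tau, dim=1),
                               F.softmax(teacher_logits / tau, dim=1),
                               reduction='batchmean') * tau * tau
    return loss / num_samples
```

In this sketch, increasing `num_samples` approaches the expectation over all augmentations along the sampled semantic directions; the appeal of an analytic upper bound, as described in the abstract, is that this expectation can be optimized without drawing any samples at all.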