Title
Towards Recognizing New Semantic Concepts in New Visual Domains
Author
Abstract
Deep learning models heavily rely on large-scale annotated datasets for training. Unfortunately, no dataset can capture the infinite variability of the real world, so neural networks are inherently limited by the visual and semantic information contained in their training set. In this thesis, we argue that it is crucial to design deep architectures that can operate in previously unseen visual domains and recognize novel semantic concepts. In the first part of the thesis, we describe different solutions that enable deep models to generalize to new visual domains by transferring knowledge from one or more labeled source domains to a target domain where no labeled data are available. We show how variants of batch normalization (BN) can be applied to different scenarios, from domain adaptation when source and target are mixtures of multiple latent domains, to domain generalization, continuous domain adaptation, and predictive domain adaptation, where information about the target domain is available only in the form of metadata. In the second part of the thesis, we show how to extend the knowledge of a pretrained deep model to new semantic concepts without access to the original training set. We address the scenarios of sequential multi-task learning, using transformed task-specific binary masks; open-world recognition, with end-to-end training and enforced clustering; and incremental class learning in semantic segmentation, where we highlight and address the problem of the semantic shift of the background class. In the final part, we tackle a more challenging problem: given images of multiple domains and semantic categories (together with their attributes), how can we build a model that recognizes images of unseen concepts in unseen domains? We propose an approach based on mixing domains and semantics at the level of inputs and features, a first, promising step towards solving this problem.
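To make the BN-based idea from the first part concrete, the following is a minimal toy sketch (not the thesis implementation, and simplified to a single forward pass with NumPy): a normalization layer that keeps one set of statistics per visual domain while sharing the affine parameters, which is the core mechanism behind adapting a network across domains through its normalization layers. All names here are illustrative.

```python
import numpy as np

class DomainBatchNorm:
    """Toy sketch of domain-specific batch normalization: each domain
    gets its own normalization statistics, while the learnable affine
    parameters (gamma, beta) are shared across domains."""

    def __init__(self, num_features, num_domains, eps=1e-5):
        self.eps = eps
        # Per-domain statistics (one row per domain).
        self.mean = np.zeros((num_domains, num_features))
        self.var = np.ones((num_domains, num_features))
        # Shared affine parameters.
        self.gamma = np.ones(num_features)
        self.beta = np.zeros(num_features)

    def __call__(self, x, domain):
        # x: (batch, num_features); normalize with this domain's stats.
        mu, var = x.mean(axis=0), x.var(axis=0)
        # Simplified: overwrite the stored stats (a real layer would
        # use an exponential moving average during training).
        self.mean[domain], self.var[domain] = mu, var
        return self.gamma * (x - mu) / np.sqrt(var + self.eps) + self.beta
```

At test time, a sample from domain `d` would be normalized with the statistics stored for `d`, so the same shared backbone produces domain-aligned features.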
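The mixing idea in the final part can likewise be sketched in a few lines. This is a hedged, mixup-style illustration of interpolating two inputs (e.g. from different domains or semantic categories), not the actual method proposed in the thesis; the function name and Beta-distribution parameterization are assumptions for illustration.

```python
import numpy as np

def mix(x_a, x_b, alpha=0.2, rng=None):
    """Mixup-style convex combination of two inputs or feature maps.

    Draws a mixing coefficient lam ~ Beta(alpha, alpha) and returns
    lam * x_a + (1 - lam) * x_b together with lam, so the same
    coefficient can be reused to mix the corresponding labels.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x_a + (1.0 - lam) * x_b, lam
```

Applying such mixing across both the domain axis and the semantic axis gives the model intermediate training points between seen domains and seen concepts, which is the intuition behind recognizing unseen concepts in unseen domains.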