Paper Title
Doodle It Yourself: Class Incremental Learning by Drawing a Few Sketches
Paper Authors
Paper Abstract
The human visual system is remarkable in learning new visual concepts from just a few examples. This is precisely the goal behind few-shot class incremental learning (FSCIL), where the emphasis is additionally placed on ensuring the model does not suffer from "forgetting". In this paper, we push the boundary further for FSCIL by addressing two key questions that bottleneck its ubiquitous application: (i) can the model learn from diverse modalities other than just photos (as humans do), and (ii) what if photos are not readily accessible (due to ethical and privacy constraints)? Our key innovation lies in advocating the use of sketches as a new modality for class support. The product is a "Doodle It Yourself" (DIY) FSCIL framework where users can freely sketch a few examples of a novel class for the model to learn to recognize photos of that class. For that, we present a framework that infuses (i) gradient consensus for domain-invariant learning, (ii) knowledge distillation for preserving old class information, and (iii) graph attention networks for message passing between old and novel classes. We experimentally show that sketches are better class support than text in the context of FSCIL, echoing findings elsewhere in the sketching literature.
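The gradient-consensus idea mentioned in the abstract can be illustrated with a minimal sketch. This is NOT the paper's exact formulation, only a simplified, commonly used sign-agreement rule: per-parameter gradients computed on the sketch domain and the photo domain are combined only where their signs agree, so the update follows directions shared by both domains. The function name and the averaging choice are assumptions for illustration.

```python
import numpy as np

def consensus_gradient(g_sketch, g_photo):
    """Simplified gradient-consensus rule (illustrative, not the paper's exact method).

    Keeps only the gradient components where the sketch-domain and
    photo-domain gradients agree in sign, averaging those components;
    conflicting components are zeroed out.
    """
    agree = np.sign(g_sketch) == np.sign(g_photo)   # element-wise sign agreement
    combined = 0.5 * (g_sketch + g_photo)           # average where both domains agree
    return np.where(agree, combined, 0.0)           # zero out conflicting directions
```

For example, with `g_sketch = [0.5, -0.2, 0.1]` and `g_photo = [0.3, 0.4, -0.1]`, only the first component survives (both positive), giving `[0.4, 0.0, 0.0]`; the two conflicting components are suppressed rather than letting one domain dominate.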