Paper Title
SketchEmbedNet: Learning Novel Concepts by Imitating Drawings
Paper Authors
Paper Abstract
Sketch drawings capture the salient information of visual concepts. Previous work has shown that neural networks are capable of producing sketches of natural objects drawn from a small number of classes. While earlier approaches focus on generation quality or retrieval, we explore properties of image representations learned by training a model to produce sketches of images. We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting. Additionally, we find that these learned representations exhibit interesting structure and compositionality.