Paper Title

Prompt-guided Scene Generation for 3D Zero-Shot Learning

Paper Authors

Majid Nasiri, Ali Cheraghian, Townim Faisal Chowdhury, Sahar Ahmadi, Morteza Saberi, Shafin Rahman

Paper Abstract

Zero-shot learning on 3D point cloud data is a relatively underexplored problem compared to its 2D image counterpart. 3D data brings new challenges for ZSL due to the unavailability of robust pre-trained feature extraction models. To address this problem, we propose a prompt-guided 3D scene generation and supervision method that augments 3D data to train the network better, exploring the complex interplay of seen and unseen objects. First, we merge the point clouds of two 3D models in ways described by a prompt; the prompt acts as the annotation describing each 3D scene. We then perform contrastive learning to train our proposed architecture in an end-to-end manner. We argue that 3D scenes can relate objects more effectively than single objects because popular language models (like BERT) achieve high performance when objects appear in context. Our proposed prompt-guided scene generation method encapsulates data augmentation and prompt-based annotation/captioning to improve 3D ZSL performance. We achieve state-of-the-art ZSL and generalized ZSL performance on synthetic (ModelNet40, ModelNet10) and real-scanned (ScanObjectNN) 3D object datasets.
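
The abstract only sketches the pipeline, so the following is a minimal illustrative sketch, not the authors' code. It shows the two ideas the abstract describes: (1) generating a 3D scene by merging two point clouds according to a spatial relation named in a prompt, with the prompt serving as the scene's caption, and (2) a CLIP-style symmetric contrastive loss between a point-cloud encoder and frozen BERT prompt embeddings. The relation templates, the offsets in `make_scene`, the tiny PointNet-style encoder, and the InfoNCE formulation are assumptions made for illustration; only the prompt-as-caption idea and the use of BERT come from the abstract.

```python
# Minimal sketch (illustrative, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

# Hypothetical spatial-relation templates; the paper's actual prompt
# formats are not given in the abstract.
OFFSETS = {
    "next to":   torch.tensor([1.5, 0.0, 0.0]),
    "on top of": torch.tensor([0.0, 0.0, 1.0]),
    "behind":    torch.tensor([0.0, 1.5, 0.0]),
}

def make_scene(pc_a, pc_b, relation):
    """Merge two (N, 3) point clouds into one scene by shifting the
    second object according to the relation named in the prompt."""
    return torch.cat([pc_a, pc_b + OFFSETS[relation]], dim=0)

def make_prompt(name_a, name_b, relation):
    """The prompt acts as the annotation/caption of the generated scene."""
    return f"a {name_a} {relation} a {name_b}"

class PointEncoder(nn.Module):
    """Tiny PointNet-style encoder (per-point MLP + max-pooling), a
    stand-in for whatever 3D backbone the paper actually trains."""
    def __init__(self, dim=768):  # 768 to match BERT's hidden size
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, dim),
        )

    def forward(self, pts):                      # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values   # (B, dim)

def contrastive_loss(scene_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over matched (scene, prompt) pairs in a batch."""
    s = F.normalize(scene_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = s @ t.t() / temperature             # (B, B) similarities
    labels = torch.arange(s.size(0), device=s.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# Usage: random clouds stand in for pairs of ModelNet objects.
tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

scenes = torch.stack([
    make_scene(torch.randn(1024, 3), torch.randn(1024, 3), "next to"),
    make_scene(torch.randn(1024, 3), torch.randn(1024, 3), "on top of"),
])
prompts = [make_prompt("chair", "table", "next to"),
           make_prompt("sofa", "lamp", "on top of")]

with torch.no_grad():                            # frozen language model
    enc = tok(prompts, return_tensors="pt", padding=True)
    text_emb = bert(**enc).last_hidden_state[:, 0]   # [CLS] embeddings

scene_emb = PointEncoder()(scenes)
loss = contrastive_loss(scene_emb, text_emb)
loss.backward()                                  # trains only the 3D encoder
```

At inference, the same idea supports zero-shot classification: embed candidate class prompts with BERT and assign an unseen object's point cloud to the nearest prompt embedding in the shared space.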
