Paper Title


MOVE: Unsupervised Movable Object Segmentation and Detection

Paper Authors

Adam Bielski, Paolo Favaro

Abstract


We introduce MOVE, a novel method to segment objects without any form of supervision. MOVE exploits the fact that foreground objects can be shifted locally relative to their initial position to produce realistic (undistorted) new images. This property allows us to train a segmentation model on a dataset of images without annotations and to achieve state-of-the-art (SotA) performance on several evaluation datasets for unsupervised salient object detection and segmentation. In unsupervised single object discovery, MOVE gives an average CorLoc improvement of 7.2% over the SotA, and in unsupervised class-agnostic object detection it gives a relative AP improvement of 53% on average. Our approach is built on top of self-supervised features (e.g. from DINO or MAE), an inpainting network (based on the Masked AutoEncoder) and adversarial training.
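The core compositing step the abstract describes — cutting out a masked foreground, shifting it, and pasting it onto an inpainted background to form a new image — can be sketched as follows. This is a minimal illustration, not the authors' code: the segmentation and inpainting networks are abstracted away (the mask and inpainted background are taken as given inputs), and `shift_composite` is a hypothetical helper name.

```python
import numpy as np

def shift_composite(image, mask, inpainted_bg, dx, dy):
    """Compose a new image by shifting the masked foreground by (dx, dy).

    image:        H x W x C array, the original image.
    mask:         H x W binary array (1 = foreground), e.g. from a
                  segmentation network (abstracted here).
    inpainted_bg: H x W x C array, the background with the foreground
                  region filled in, e.g. by an inpainting network.
    """
    # Cut out the foreground (zero elsewhere) and shift it with its mask.
    fg = image * mask[..., None]
    shifted_fg = np.roll(fg, shift=(dy, dx), axis=(0, 1))
    shifted_mask = np.roll(mask, shift=(dy, dx), axis=(0, 1))
    # Paste the shifted foreground over the inpainted background.
    return shifted_fg + inpainted_bg * (1 - shifted_mask)[..., None]
```

In MOVE's training loop, composites like this are fed to a discriminator; the adversarial loss pushes the predicted masks toward real object boundaries, since only accurate masks yield realistic shifted images.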
