Paper Title

Real-time Mapping of Physical Scene Properties with an Autonomous Robot Experimenter

Authors

Haughton, Iain, Sucar, Edgar, Mouton, Andre, Johns, Edward, Davison, Andrew J.

Abstract

Neural fields can be trained from scratch to represent the shape and appearance of 3D scenes efficiently. It has also been shown that they can densely map correlated properties such as semantics, via sparse interactions from a human labeller. In this work, we show that a robot can densely annotate a scene with arbitrary discrete or continuous physical properties via its own fully-autonomous experimental interactions, as it simultaneously scans and maps it with an RGB-D camera. A variety of scene interactions are possible, including poking with force sensing to determine rigidity, measuring local material type with single-pixel spectroscopy or predicting force distributions by pushing. Sparse experimental interactions are guided by entropy to enable high efficiency, with tabletop scene properties densely mapped from scratch in a few minutes from a few tens of interactions.
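The abstract notes that sparse experimental interactions are guided by entropy for efficiency. A minimal sketch of this idea, assuming the robot holds a per-pixel class-probability map over property labels (the array shapes and function names here are illustrative, not the paper's implementation):

```python
import numpy as np

def entropy_map(probs: np.ndarray) -> np.ndarray:
    """Per-pixel Shannon entropy of an (H, W, C) class-probability map."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=-1)

def next_interaction(probs: np.ndarray) -> tuple:
    """Pick the pixel where the predicted property is most uncertain,
    i.e. the most informative place for the next poke/measurement."""
    h = entropy_map(probs)
    return tuple(int(i) for i in np.unravel_index(np.argmax(h), h.shape))

# Example: three pixels are nearly certain, one is maximally uncertain.
probs = np.array([[[0.99, 0.01], [0.99, 0.01]],
                  [[0.99, 0.01], [0.50, 0.50]]])
print(next_interaction(probs))  # the uniform (most uncertain) pixel: (1, 1)
```

Each interaction's measurement would then be used to update the neural field's property head, shrinking entropy around the probed region so subsequent interactions spread over the remaining uncertain areas.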
