Title
Development and Evaluation of a Learning-based Model for Real-time Haptic Texture Rendering
Authors
Abstract
Current Virtual Reality (VR) environments lack the rich haptic signals that humans experience during real-life interactions, such as the sensation of texture during lateral movement on a surface. Adding realistic haptic textures to VR environments requires a model that generalizes to variations in a user's interaction and to the wide variety of textures that exist in the world. Existing methods for haptic texture rendering typically require one model per texture, resulting in low scalability. We present a deep learning-based action-conditional model for haptic texture rendering and evaluate its perceptual performance in rendering realistic texture vibrations through a multi-part human user study. The model is unified over all materials and uses data from a vision-based tactile sensor (GelSight) to render the appropriate surface, conditioned on the user's action, in real time. To render the textures, we use a high-bandwidth vibrotactile transducer attached to a 3D Systems Touch device. The results of our user study show that our learning-based method creates high-frequency texture renderings of comparable or better quality than state-of-the-art methods, without the need to learn a separate model per texture. Furthermore, we show that the method is capable of rendering previously unseen textures using a single GelSight image of their surface.
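The abstract describes an action-conditional pipeline: a single model, conditioned on one GelSight image of a surface and the user's real-time motion, predicts the vibration waveform played through the vibrotactile transducer. The following is a minimal sketch of that data flow, not the authors' implementation; every name here (`TextureModel`, `predict_vibration`, the dummy encoder and waveform) is a hypothetical placeholder for illustration only.

```python
import numpy as np

# Hypothetical sketch of the action-conditional rendering flow described
# in the abstract. A trained network would replace the placeholder math.

class TextureModel:
    """Single model, unified over all materials, conditioned on a
    GelSight image of the surface and the user's current action."""

    def __init__(self, gelsight_image: np.ndarray):
        # Stand-in for a learned image encoder: reduce the surface
        # image to a fixed-length texture embedding.
        self.embedding = gelsight_image.mean(axis=(0, 1))

    def predict_vibration(self, force_n: float, speed_mm_s: float,
                          n_samples: int = 100) -> np.ndarray:
        # Placeholder mapping (embedding, action) -> vibration window.
        # A real model would predict texture-specific high-frequency
        # content; here amplitude simply scales with force and speed.
        amplitude = 1e-3 * force_n * speed_mm_s * float(self.embedding.mean())
        t = np.arange(n_samples)
        return amplitude * np.sin(2 * np.pi * 5 * t / n_samples)

if __name__ == "__main__":
    surface = np.random.rand(64, 64, 3)   # stands in for a GelSight image
    model = TextureModel(surface)
    # One rendering step: read the user's action from the haptic device
    # (hard-coded here), predict a vibration window, send it to the
    # vibrotactile transducer for playback.
    window = model.predict_vibration(force_n=1.5, speed_mm_s=80.0)
    print(window.shape)  # (100,) samples to drive the actuator
```

Because the model is conditioned on the surface image rather than trained per texture, rendering an unseen material would amount to constructing `TextureModel` with a new GelSight image, which is the scalability property the abstract emphasizes.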