Paper Title

A Robust Interactive Facial Animation Editing System

Authors

Eloïse Berson, Catherine Soladié, Vincent Barrielle, Nicolas Stoiber

Abstract

Over the past few years, the automatic generation of facial animation for virtual characters has garnered interest among the animation research and industry communities. Recent research contributions leverage machine-learning approaches to enable impressive capabilities in generating plausible facial animation from audio and/or video signals. However, these approaches do not address the problem of animation editing, meaning the need to correct an unsatisfactory baseline animation or to modify the animation content itself. In facial animation pipelines, the process of editing an existing animation is just as important and time-consuming as producing a baseline. In this work, we propose a new learning-based approach to easily edit a facial animation from a set of intuitive control parameters. To cope with high-frequency components in facial movements and preserve temporal coherency in the animation, we use a resolution-preserving fully convolutional neural network that maps control parameters to sequences of blendshape coefficients. We stack an additional resolution-preserving animation autoencoder after the regressor to ensure that the system outputs natural-looking animation. The proposed system is robust and can handle coarse, exaggerated edits from non-specialist users. It also retains the high-frequency motion of the facial animation.
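
The abstract describes a two-stage architecture: a resolution-preserving fully convolutional regressor that maps control-parameter sequences to blendshape coefficient sequences, followed by a resolution-preserving animation autoencoder stacked on its output. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; it is not the authors' implementation, and the layer counts, channel widths, kernel sizes, and the control/blendshape dimensions (10 and 34) are all assumptions chosen for illustration.

```python
# Hypothetical sketch of the two-stage system described in the abstract (not the paper's code).
# "Resolution-preserving" is read here as 1D convolutions with stride 1 and "same" padding,
# so the temporal length of the sequence is never downsampled.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, k=5):
    # Same-padded 1D convolution keeps the number of frames unchanged.
    return nn.Sequential(nn.Conv1d(in_ch, out_ch, k, padding=k // 2), nn.ReLU())


class Regressor(nn.Module):
    """Maps control parameters (B, n_ctrl, T) to blendshape coefficients (B, n_blendshapes, T)."""

    def __init__(self, n_ctrl=10, n_blendshapes=34, width=128):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(n_ctrl, width),
            conv_block(width, width),
            nn.Conv1d(width, n_blendshapes, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)


class AnimationAutoencoder(nn.Module):
    """Resolution-preserving autoencoder, assumed trained on natural animation sequences,
    used to project the regressor output back onto plausible facial motion."""

    def __init__(self, n_blendshapes=34, width=128):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(n_blendshapes, width), conv_block(width, width))
        self.decoder = nn.Conv1d(width, n_blendshapes, kernel_size=5, padding=2)

    def forward(self, x):
        return self.decoder(self.encoder(x))


# Stacked pipeline: edit controls -> raw blendshape sequence -> cleaned-up natural sequence.
regressor, autoencoder = Regressor(), AnimationAutoencoder()
controls = torch.randn(1, 10, 120)            # e.g. 120 frames of 10 control curves
animation = autoencoder(regressor(controls))  # (1, 34, 120): same temporal resolution as input
```

Because neither stage changes the temporal resolution, high-frequency components of the motion can in principle pass through the network rather than being smoothed away by downsampling, which is the design property the abstract emphasizes.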
