Generative AI for Animating 3D Human Face and Body Behaviors

About the resource/s

Uploaded by AUTH

Human behavior, both facial (e.g., expressions) and bodily (e.g., actions), has been studied in detail, for example for expression and action classification and prediction, but few works have explored the generation of novel behaviors. Generating novel sequences of facial expressions, talking heads, or body motions that form natural and plausible actions with continuous and smooth temporal dynamics is a challenging problem. Such motions can simulate full-body movement, as in gait; part-specific movement, as in playing the guitar or making a phone call; or facial dynamics, including expressions, action units, and mouth movements when a person speaks or reads a text. With the advent of powerful generative models such as GANs and diffusion models, novel data generation paradigms have become possible, and these networks have proven powerful in many image generation tasks. However, many issues remain open, especially when moving from the static to the dynamic case, and new research problems emerge.
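
As a rough illustration of the diffusion-based generation paradigm mentioned above, the sketch below shows one possible training step for a noise-predicting denoiser over 3D motion sequences. This is not the method presented in the talk: the model, the clip length, the joint count, and the noise schedule are all assumptions made for the example.

```python
# Minimal, illustrative sketch (assumptions only, not the talk's method): a DDPM-style
# training step for generating 3D body-motion clips with smooth temporal dynamics.
import torch
import torch.nn as nn

T_FRAMES, N_JOINTS = 60, 24          # assumed: 60-frame clip, 24 joints x 3D coordinates
FEAT = N_JOINTS * 3
DIFF_STEPS = 1000

# Linear noise schedule (a standard DDPM-style choice).
betas = torch.linspace(1e-4, 2e-2, DIFF_STEPS)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

class MotionDenoiser(nn.Module):
    """Predicts the noise added to a motion clip, conditioned on the diffusion step."""
    def __init__(self, feat=FEAT, hidden=256):
        super().__init__()
        self.step_emb = nn.Embedding(DIFF_STEPS, hidden)
        # Recurrence over frames encourages temporally coherent predictions.
        self.gru = nn.GRU(feat + hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, feat)

    def forward(self, x_noisy, t):
        # x_noisy: (B, T_FRAMES, FEAT), t: (B,)
        emb = self.step_emb(t).unsqueeze(1).expand(-1, x_noisy.size(1), -1)
        h, _ = self.gru(torch.cat([x_noisy, emb], dim=-1))
        return self.out(h)

def diffusion_training_step(model, x0, optimizer):
    """One denoising training step on a batch of clean motion clips x0: (B, T, FEAT)."""
    b = x0.size(0)
    t = torch.randint(0, DIFF_STEPS, (b,))
    noise = torch.randn_like(x0)
    a = alphas_cum[t].view(b, 1, 1)
    x_noisy = a.sqrt() * x0 + (1 - a).sqrt() * noise   # forward (noising) process
    pred = model(x_noisy, t)
    loss = nn.functional.mse_loss(pred, noise)          # learn to predict the injected noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage (illustrative):
# model = MotionDenoiser(); opt = torch.optim.Adam(model.parameters(), 1e-4)
# loss = diffusion_training_step(model, torch.randn(8, T_FRAMES, FEAT), opt)
```

At sampling time, the same network would be applied iteratively to denoise a random sequence into a plausible motion clip; the GRU is only one of several possible ways to inject temporal structure.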

We expect that generating synthetic yet realistic static and dynamic data of humans can have a large impact in several contexts. A straightforward outcome of developing such techniques is the ability to generate an abundance and variety of new data that would otherwise be difficult, expensive, and time-consuming to obtain from reality. Such data can be essential in simulation, in virtual and augmented reality, and in training more robust learning tools, to name a few uses. For example, we can expect new applications in the game and movie industries, where fully synthetic actors could be used in the near future without the need for explicit modeling.

In this talk, we will present recent work in this domain on generating facial expressions, talking heads, and body animations of 3D human avatars.
