Deep neural network generates realistic character-scene interactions

A key part of bringing 3D animated characters to life is the ability to depict their physical motions naturally in any scene or environment.

Animating characters to interact naturally with objects and their environment requires combining different types of movements, and these motions can differ greatly not only in their postures but also in their duration, contact patterns, and possible transitions. To date, most machine learning-based methods for user-friendly character motion control have been limited to simpler actions or single motions, such as commanding an animated character to move from one point to the next.

Computer scientists from the University of Edinburgh and Adobe Research, the company's team of research scientists and engineers who shape early-stage ideas into innovative technologies, have developed a novel data-driven technique that uses deep neural networks to precisely guide animated characters through a variety of motions, such as sitting in chairs, picking up objects, running, side-stepping, and climbing over obstacles and through doorways, and achieves this in a user-friendly way with simple control commands.

The researchers will demonstrate their work, Neural State Machine for Character-Scene Interactions, at ACM SIGGRAPH Asia, held Nov. 17 to 20 in Brisbane, Australia. SIGGRAPH Asia, now in its 12th year, attracts the most respected technical and creative people from around the world in computer graphics, animation, interactivity, gaming, and emerging technologies.

To animate character-scene interactions with objects and the environment, there are two main aspects to consider, say the researchers: planning and adaptation. First, to complete a given task, such as sitting in a chair or picking up an object, the character needs to plan and transition through a set of different movements. For example, this can include starting to walk, slowing down, turning around while accurately placing the feet, interacting with the object, and finally continuing on to another action. Second, the character needs to naturally adapt its motion to variations in the shape and size of objects and avoid obstacles along its path.
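To make these two aspects concrete, here is a minimal sketch of the kinds of inputs such a controller would consume each frame: a goal describing what to do and where (planning), and a coarse description of the object and surrounding geometry (adaptation). All class and field names here are illustrative assumptions, not the authors' actual data format.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Goal:
    """Planning input: the action to perform and where to perform it."""
    action: str            # e.g. "sit", "carry", "walk" (illustrative labels)
    target_position: Vec3  # location of the chair, box, doorway, ...

@dataclass
class SceneGeometry:
    """Adaptation input: coarse shape of the target object and nearby obstacles."""
    object_occupancy: List[float]     # occupancy samples around the target object
    environment_samples: List[float]  # height/obstacle samples along the path

@dataclass
class CharacterState:
    """The character's current pose, updated every frame by the controller."""
    joint_positions: List[Vec3]
    joint_rotations: List[Vec3]
    contacts: List[bool]              # which end effectors touch the environment
```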

“Achieving this in production-ready quality is not straightforward and very time-consuming. Our Neural State Machine instead learns the motion and required state transitions directly from the scene geometry and a given goal action,” says Sebastian Starke, senior author of the research and a PhD student at the University of Edinburgh in Taku Komura’s lab. “Along with that, our method is able to produce multiple different types of motions and actions in high quality from a single network.”

Using motion capture data, the researchers' framework learns how to transition the character most naturally from one movement to the next: for example, stepping over an obstacle blocking a doorway and then walking through the doorway, or picking up a box and then carrying it to set down on a nearby table or desk.

See a video of the technique here.

The technique infers the character's next pose in the scene from its previous pose and the scene geometry. Another key component of the researchers' framework is that it enables users to interactively control and navigate the character with simple control commands. Additionally, the framework does not need to retain all of the original captured data; the network heavily compresses it while preserving the important content of the animations.
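As a rough illustration of that autoregressive loop, the sketch below rolls a trained model forward one frame at a time, feeding back the previous pose along with the scene geometry and the user's current command. The NeuralStateMachine class, its predict method, and the placeholder body are assumptions made for illustration only; they are not the authors' code or API.

```python
from typing import List
import numpy as np

class NeuralStateMachine:
    """Hypothetical stand-in for the trained network.

    The motion-capture database is compressed into the network weights,
    so no raw animation clips need to be stored or played back at run time.
    """

    def predict(self, prev_pose: np.ndarray,
                scene_geometry: np.ndarray,
                command: np.ndarray) -> np.ndarray:
        # Placeholder: a trained model would map (previous pose, scene, command)
        # to the next pose here. Returning the previous pose keeps the sketch runnable.
        return prev_pose

def animate(model: NeuralStateMachine,
            initial_pose: np.ndarray,
            scene_geometry: np.ndarray,
            commands: List[np.ndarray]) -> List[np.ndarray]:
    """Autoregressive inference: each frame depends on the frame before it."""
    poses = [initial_pose]
    for command in commands:  # simple per-frame control commands issued by the user
        poses.append(model.predict(poses[-1], scene_geometry, command))
    return poses
```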

“The technique essentially mimics how a human intuitively moves through a scene or environment and how it interacts with objects, realistically and precisely,” says Komura, coauthor and chair of computer graphics at the University of Edinburgh.

Down the road, the researchers intend to work on other related problems in data-driven character animation, including motions in which multiple actions occur simultaneously, and animating close interactions between two characters or even crowds.

###

Along with Sebastian Starke and Taku Komura, the researchers behind Neural State Machine for Character-Scene Interactions include He Zhang (University of Edinburgh) and Jun Saito (Adobe Research-USA). For the paper and video, visit the team’s project page.

This information is sourced from https://www.eurekalert.org/pub_releases/2019-10/afcm-dnn102919.php

Ilka Gobius
65-976-98370
[email protected]
http://www.acm.org 

