Learning Predict-and-Simulate Policies From Unorganized Human Motion Data

Seoul National University

Our predict-and-simulate policy creates an agile, interactively-controllable, physically-simulated character equipped with various motor skills learned from unorganized motion data.


Abstract

The goal of this research is to create physically simulated biped characters equipped with a rich repertoire of motor skills. The user can control the characters interactively by modulating their control objectives. The characters can interact physically with each other and with the environment. We present a novel network-based algorithm that learns control policies from unorganized, minimally-labeled human motion data. The network architecture for interactive character animation incorporates an RNN-based motion generator into a DRL-based controller for physics simulation and control. The motion generator guides forward dynamics simulation by feeding a sequence of future motion frames to track. The rich future prediction facilitates policy learning from large training data sets. We demonstrate the effectiveness of our approach with biped characters that learn a variety of dynamic motor skills from large, unorganized data and react to unexpected perturbation beyond the scope of the training data.
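The control loop described above, where the motion generator predicts a window of future frames that the simulation controller then tracks, can be sketched as follows. This is a minimal illustrative sketch only: every class, function, and dimension here (`MotionGenerator`, `Policy`, `step_simulation`, `FRAME_DIM`, `HORIZON`) is an assumption for demonstration, not the authors' actual implementation, which uses trained RNN and DRL networks with a physics engine.

```python
import numpy as np

# Hypothetical sketch of the predict-and-simulate loop. All names and
# dimensions are illustrative assumptions, not the paper's implementation.

rng = np.random.default_rng(0)

FRAME_DIM = 8    # size of a motion-frame feature vector (assumed)
HORIZON = 4      # number of future frames predicted per step (assumed)
ACTION_DIM = 3   # size of the character's control action (assumed)


class MotionGenerator:
    """Stands in for the RNN-based motion generator: given the current
    frame and its hidden state, it emits a sequence of future frames."""

    def __init__(self):
        self.hidden = np.zeros(FRAME_DIM)

    def predict_future(self, frame):
        # Toy recurrence: blend the hidden state with the input frame,
        # then roll it forward HORIZON steps.
        frames = []
        h = 0.5 * self.hidden + 0.5 * frame
        for _ in range(HORIZON):
            h = np.tanh(h + 0.01)
            frames.append(h.copy())
        self.hidden = h
        return np.stack(frames)  # shape (HORIZON, FRAME_DIM)


class Policy:
    """Stands in for the DRL-based controller: maps the simulated state
    plus the predicted future frames to a control action."""

    def __init__(self):
        in_dim = FRAME_DIM + HORIZON * FRAME_DIM
        self.W = rng.normal(scale=0.1, size=(ACTION_DIM, in_dim))

    def act(self, sim_state, future_frames):
        x = np.concatenate([sim_state, future_frames.ravel()])
        return np.tanh(self.W @ x)  # shape (ACTION_DIM,)


def step_simulation(state, action):
    """Placeholder forward-dynamics step; a real implementation would
    advance a physics engine with the action as control input."""
    return 0.99 * state + 0.01 * np.resize(action, state.shape)


generator, policy = MotionGenerator(), Policy()
state = rng.normal(size=FRAME_DIM)
for _ in range(10):
    future = generator.predict_future(state)  # predict future frames
    action = policy.act(state, future)        # control from prediction
    state = step_simulation(state, action)    # simulate one step
```

The key structural point the sketch mirrors is that the policy conditions on the generator's predicted window at every step, so the controller tracks a rich future rather than a single target pose.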

Publication

Soohwan Park, Hoseok Ryu, Seyoung Lee, Sunmin Lee, and Jehee Lee. 2019.
Learning Predict-and-Simulate Policies From Unorganized Human Motion Data.
ACM Trans. Graph. 38, 6 (SIGGRAPH Asia 2019), Article 205.

Video


Code

Code will be available soon on GitHub.

Bibtex

@article{Park:2019,
    author = {Park, Soohwan and Ryu, Hoseok and Lee, Seyoung and Lee, Sunmin and Lee, Jehee},
    title = {Learning Predict-and-Simulate Policies From Unorganized Human Motion Data},
    journal = {ACM Trans. Graph.},
    volume = {38},
    number = {6},
    year = {2019},
    articleno = {205}
}