Motion Patches
Building Blocks for Virtual Environments Annotated with Motion Data

Real-time animation of human figures is an important problem in the context of computer games and virtual environments. Recently, the use of large collections of captured motion data has brought increased realism to character animation. However, when the virtual environment is large and complex, the effort of capturing motion data in a physical environment and adapting it to an extended virtual environment becomes the bottleneck for achieving interactive character animation and control. We present a new technique that allows animated characters to navigate through a large virtual environment constructed from a set of building blocks. The building blocks can be arbitrarily assembled to create novel environments. We annotate each block with a motion patch, which specifies what motions are available to animated characters within the block. The versatility and flexibility of our approach are demonstrated through examples in which multiple characters are animated and controlled at interactive rates in large, complex virtual environments.

Download (UPDATED: MAY 15, 2006)

Video (MOV; QuickTime MPEG-4, 68.5 MB)

Publications (UPDATED: MAY 15, 2006)

Kang Hoon Lee, Myung Geol Choi and Jehee Lee, Motion Patches: Building Blocks for Virtual Environments Annotated with Motion Data, ACM SIGGRAPH 2006.



We created a simple interactive office environment. The user can create, move, and remove desk-and-chair blocks using intuitive interfaces, and can interactively make animated characters walk around, sit on one of the chairs, and perform specific behaviors. There are eight motion patches in this example, not counting the tileable walk patch. Three patches (sit down/stand up, work at the desk, and chat at the desk) are embedded in the desk-and-chair block, and five patches (idle, chat, dispute, presentation, and walk-to-stop) are embedded in the square ground panel.
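The annotation scheme above can be pictured as a small data structure: each building block carries the list of motion patches embedded in it. The sketch below uses hypothetical class and field names (`MotionPatch`, `BuildingBlock`, `available_behaviors`) that do not appear in the paper; it only illustrates the block-to-patch association.

```python
# Hypothetical sketch: building blocks annotated with the motion patches
# they embed. Names and structure are illustrative, not the paper's API.
from dataclasses import dataclass, field


@dataclass
class MotionPatch:
    name: str  # e.g. "sit down/stand up"


@dataclass
class BuildingBlock:
    kind: str  # e.g. "desk-and-chair" or "square ground panel"
    patches: list = field(default_factory=list)

    def available_behaviors(self):
        # The behaviors a character can perform inside this block
        return [p.name for p in self.patches]


desk = BuildingBlock("desk-and-chair", [
    MotionPatch("sit down/stand up"),
    MotionPatch("work at the desk"),
    MotionPatch("chat at the desk"),
])

ground = BuildingBlock("square ground panel", [
    MotionPatch(n)
    for n in ("idle", "chat", "dispute", "presentation", "walk-to-stop")
])

print(desk.available_behaviors())
# ['sit down/stand up', 'work at the desk', 'chat at the desk']
```

Because a block's behaviors travel with the block, assembling a new environment from these blocks automatically determines what characters can do in each part of it.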


Jungle Gym

We recorded about 28 minutes of motion in a playground environment (top right in the right figure). In the recorded data, our subject repeatedly walked, climbed, and slid in the playground. Our system automatically identified eleven motion patches from the motion data and the environment information.

Once motion patches have been identified, we can transfer the original motion to arbitrarily shaped target environments by fitting the patches onto all similar sub-parts. In the large jungle gym (right figure), 2,063 instances of motion patches are fitted and stitched together to produce a strongly connected motion graph with 227,583 nodes and 1,474,971 edges. When the user commands animated characters to go to specific locations, they find shortest paths on the graph and follow them.
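The path-finding step above is a standard shortest-path query on the stitched motion graph. As a minimal sketch, the toy graph and edge costs below are invented for illustration; the real graph connects entry and exit poses of the fitted patches, and its cost function is not specified here.

```python
# Sketch of shortest-path queries on a motion graph, using Dijkstra's
# algorithm. The graph is a dict {node: [(neighbor, cost), ...]};
# nodes and costs here are toy placeholders.
import heapq


def shortest_path(graph, start, goal):
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if goal not in dist:
        return None  # goal unreachable
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]


# Toy example: two routes from A to D; the cheaper one goes through C.
g = {"A": [("B", 1.0)], "B": [("C", 2.0), ("D", 5.0)],
     "C": [("D", 1.0)], "D": []}
print(shortest_path(g, "A", "D"))  # ['A', 'B', 'C', 'D']
```

Because the graph in the paper is strongly connected, every such query succeeds: a character at any node can reach any commanded location.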




We created one thousand animated characters on a grid of walk patches shaped like the SIGGRAPH logo. Collisions between characters are avoided approximately, at the resolution of the building blocks.
Our system required about 66 seconds to create 300 frames (10 seconds) of video images, and rendering dominated that time: with video output disabled, the same animation took only 2.8 seconds. This means the motion of one thousand characters is computed and controlled at a rate of more than 100 frames per second.
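The rate quoted above follows directly from the reported timings:

```python
# 300 frames of simulation (rendering disabled) in 2.8 seconds
frames = 300
seconds = 2.8
fps = frames / seconds
print(round(fps, 1))  # 107.1 frames per second, i.e. more than 100 fps
```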



This page is maintained by Kang Hoon Lee and was last updated on May 15, 2006.