Multiresolution Motion Analysis and Synthesis
Overview
Recently, capturing live motion has become one of the most promising technologies in character animation, because the naturalness of live performers can be duplicated in a virtual world. Much recent research in computer animation has been devoted to developing editing tools that produce convincing animation from canned motion clips. Although progress in motion capture technology has made it relatively easy to obtain high-quality motion, reusing the data is difficult because each motion was acquired for a specific performer, within a specific environment, and in a specific context.
Multiresolution analysis has generated great research interest as a unified framework that facilitates a variety of motion editing tasks. Although well-established methods exist for multiresolution analysis in vector spaces, most of them do not generalize in a uniform way to motion data that contain orientations as well as positions. A straightforward generalization may yield different representations in different coordinate systems. We propose a new method for multiresolution motion analysis that guarantees coordinate-invariance.
Coordinate-Invariance
Suppose, for example, that two identical motion clips are placed at different positions in a reference frame and the same editing operation is applied to both. Any operation based on our multiresolution analysis method is guaranteed to produce the same result regardless of the clips' positions. This property is of great practical importance: animation systems use different coordinate conventions (some take the y-axis as up and others the z-axis), and they define the local coordinate system at each joint of an articulated figure differently. Coordinate-invariant operations produce consistent results in any such system.
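For position data, this invariance is easy to check directly: a filter whose mask coefficients sum to one commutes with any rigid transformation, so filtering and repositioning can be done in either order. The sketch below is illustrative only (the mask values and data are made up, not taken from the paper):

```python
import numpy as np

# hypothetical smoothing mask; coefficients sum to 1, which makes
# the filter commute with rotations and translations
mask = np.array([0.25, 0.5, 0.25])
pts = np.random.default_rng(0).normal(size=(6, 3))  # made-up position signal

def smooth(p):
    # filter each coordinate channel with the mask
    return np.stack([np.convolve(p[:, c], mask, mode="valid")
                     for c in range(3)], axis=1)

# a rigid motion: rotation about z by 90 degrees plus a translation
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([5.0, -2.0, 1.0])

a = smooth(pts @ R.T + t)   # transform first, then filter
b = smooth(pts) @ R.T + t   # filter first, then transform
# a and b agree because the mask coefficients sum to 1
```

The same check fails for many naive orientation filters, which is what motivates the construction described below.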
Convolution Filters for Orientation Data
One of the key components of multiresolution analysis is a convolution filter that must apply to position and orientation data uniformly. Given a filter mask, convolution filtering sums the products between the mask coefficients and the data points under the mask at each position along the signal. A variety of methods have been explored for computing a weighted sum of 3D orientations; however, many of them lack important filter properties.
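In a vector space this is just the familiar sum of products. A minimal sketch of the definition at a single signal position (the mask and data are made up for the example):

```python
import numpy as np

# hypothetical smoothing mask; coefficients are illustrative
mask = np.array([0.25, 0.5, 0.25])
# a short 1D signal, e.g. the x-coordinates of a joint over time
signal = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

def convolve_at(signal, mask, k):
    """Sum the products between mask coefficients and the data
    points under the mask, centred at position k."""
    r = len(mask) // 2
    return sum(a * signal[k + i - r] for i, a in enumerate(mask))

value = convolve_at(signal, mask, 2)  # 0.25*1 + 0.5*2 + 0.25*3 = 2.0
```

The difficulty is that this sum has no direct meaning for orientations, which do not live in a vector space.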
Our approach to this problem is to transform the orientation data into their analogue in a vector space, apply the filter mask there, and then transform the result back to the orientation space. This scheme yields time-domain filters for orientation data that are computationally efficient and satisfy important properties such as coordinate-invariance, time-invariance, and symmetry.
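The published filter construction differs in detail; the following is a minimal sketch of the general scheme, assuming orientations stored as unit quaternions (w, x, y, z): each sample's neighbours are mapped into the vector space tangent at that sample via the quaternion log map, the mask is applied there, and the exp map carries the result back. All function names and the mask are illustrative.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w = a[0] * b[0] - np.dot(a[1:], b[1:])
    v = a[0] * b[1:] + b[0] * a[1:] + np.cross(a[1:], b[1:])
    return np.concatenate([[w], v])

def qinv(q):
    """Inverse of a unit quaternion is its conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qlog(q):
    """Log map: unit quaternion -> rotation vector in R^3."""
    s = np.linalg.norm(q[1:])
    if s < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(q[0], -1.0, 1.0)) * q[1:] / s

def qexp(v):
    """Exp map: rotation vector in R^3 -> unit quaternion."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(t)], np.sin(t) * v / t])

def filter_orientations(qs, mask):
    """Filter a quaternion signal: map each sample's neighbours into
    the tangent space at that sample, take the weighted sum there,
    and map the result back to the orientation space."""
    r = len(mask) // 2
    out = []
    for k in range(len(qs)):
        acc = np.zeros(3)
        for i, a in enumerate(mask):
            j = min(max(k + i - r, 0), len(qs) - 1)  # clamp at the ends
            acc += a * qlog(qmul(qinv(qs[k]), qs[j]))
        out.append(qmul(qs[k], qexp(acc)))
    return out

# smooth a sequence of rotations about the z-axis (made-up data)
angles = np.linspace(0.0, 0.4, 5)
qs = [np.array([np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)]) for a in angles]
smoothed = filter_orientations(qs, [0.25, 0.5, 0.25])
```

Because the weighted sum is formed relative to each sample's own frame, left-multiplying every input quaternion by a fixed rotation left-multiplies every output by the same rotation, which is the coordinate-invariance property discussed above.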
Transformation between linear and angular signals
Video
Motion LOD
Enhancement and attenuation
Jump and kick: jump-kick, enhanced, attenuated
Hit on face: face-hit, enhanced, attenuated
Analogy
Blend walk.mov, walk-turn.mov, and limp.mov to produce limp-turn.mov
Blend walk.mov, walk-turn.mov, and strut.mov to produce strut-turn.mov
Blend walk.mov, run.mov, and strut.mov to produce e_run_trut.mov
Transition
Splice stubtoe.mov to limp_s.mov to produce stitched.mov
Resequence
Publications
Jehee Lee and Sung Yong Shin, A Coordinate-Invariant Approach to Multiresolution Motion Analysis,
Graphical Models (Formerly GMIP), volume 63, number 2, 87-105, 2001.
PDF (1.78M) / PowerPoint Presentation (1.89M)
Jehee Lee and Sung Yong Shin, General Construction of Time-Domain Filters for Orientation Data,
IEEE Transactions on Visualization and Computer Graphics, volume 8, number 2, 119-128, 2002.
PDF (214K)
Personnel
Jehee Lee (Seoul National University)
Sung Yong Shin (KAIST)
[Last modified : Feb 11, 2003]