Multiresolution Motion Analysis and Synthesis


Overview

Capturing live motion has recently become one of the most promising technologies in character animation, because the naturalness of live performers can be easily duplicated in a virtual world. Much of the recent research in computer animation has been devoted to developing editing tools that produce convincing animation from canned motion clips. Although progress in motion capture technology has made it relatively easy to obtain high-quality motion, reusing the data is hard because the motion was acquired for a specific performer, within a specific environment, and in a specific context.

Multiresolution analysis has generated great research interest as a unified framework for a variety of motion editing tasks. Although well-established methods exist for multiresolution analysis in vector spaces, most of them do not generalize in a uniform way to motion data that contain orientations as well as positions. A straightforward generalization may yield different representations in different coordinate systems. We propose a new method for multiresolution motion analysis that guarantees coordinate-invariance.

Coordinate-Invariance

Suppose, for example, that two identical motion clips are placed at different positions in a reference frame, and the same editing operation is applied to both. Any motion editing operation based on our multiresolution analysis method is guaranteed to produce the same result regardless of where the clips are placed. This is of great practical importance: animation systems differ in their coordinate conventions. Some take the y-axis as up and others the z-axis, and they define the local coordinate system at each joint of an articulated figure differently. A coordinate-invariant operation produces consistent results in any such system.
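As a concrete illustration of this invariance for positional data (a minimal sketch, not the system's actual implementation): filtering a position signal with a mask whose coefficients sum to one commutes with translating the clip, so placing the clip elsewhere in the reference frame simply shifts the filtered result by the same amount.

```python
import numpy as np

def filter_positions(signal, mask):
    """Convolve each coordinate of a position signal with a filter mask."""
    mask = np.asarray(mask, dtype=float)
    return np.column_stack(
        [np.convolve(signal[:, d], mask, mode="valid") for d in range(signal.shape[1])]
    )

# A smoothing mask whose coefficients sum to one (binomial weights).
mask = np.array([0.25, 0.5, 0.25])

rng = np.random.default_rng(0)
positions = rng.normal(size=(50, 3))      # e.g. a clip's root positions
offset = np.array([10.0, -3.0, 7.0])      # place the same clip elsewhere

a = filter_positions(positions, mask)
b = filter_positions(positions + offset, mask)

# Same result, independent of placement: b equals a shifted by the offset.
print(np.allclose(b, a + offset))  # True
```

The invariance holds precisely because the mask coefficients sum to one; a mask without that property would not treat the two placements consistently.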

Convolution Filters for Orientation Data

One of the key components of multiresolution analysis is a convolution filter that can be applied uniformly to both position and orientation data. Given a filter mask, convolution filtering sums the products of the mask coefficients and the data points under the mask at each position along the signal. A variety of methods have been explored for computing a weighted sum of 3D orientations; however, many of them lack important filter properties.
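One simple illustration of the difficulty (my own example, not taken from the paper): a unit quaternion q and its negation -q represent the same 3D rotation, yet a naive componentwise weighted sum treats them differently, so the "average" depends on an arbitrary sign choice in the representation.

```python
import numpy as np

# Two quaternions (x, y, z, w) representing the SAME rotation: q and -q.
q = np.array([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])  # 90 deg about z
q_neg = -q

r = np.array([0.0, 0.0, 0.0, 1.0])  # the identity rotation, as a neighbor sample

# Naive componentwise weighted sums with equal weights.
avg1 = 0.5 * q + 0.5 * r
avg2 = 0.5 * q_neg + 0.5 * r

# The inputs describe identical rotations, but the sums disagree:
print(np.allclose(avg1, avg2))  # False
```

Beyond this sign ambiguity, a componentwise sum generally leaves the unit sphere and, more importantly for motion editing, gives answers that change with the choice of reference coordinate frame.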

Our approach to this problem is to transform the orientation data into their analogue in a vector space, apply the filter mask there, and then transform the results back to the orientation space. This scheme yields time-domain filters for orientation data that are computationally efficient and satisfy important properties such as coordinate-invariance, time-invariance, and symmetry.
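A minimal sketch of one such exp/log pipeline follows, assuming quaternions stored as (x, y, z, w). It maps frame-to-frame rotational displacements into a vector space via the quaternion logarithm, convolves them with the mask, and rebuilds the orientation signal by composing exponentials. The paper's actual filter construction is more careful; this only illustrates the transform-filter-transform structure.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (x, y, z, w) order."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return np.array([
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
        aw * bw - ax * bx - ay * by - az * bz,
    ])

def quat_conj(q):
    """Conjugate (inverse, for unit quaternions)."""
    return np.array([-q[0], -q[1], -q[2], q[3]])

def quat_log(q):
    """Map a unit quaternion to a rotation vector (axis * angle)."""
    v, w = q[:3], np.clip(q[3], -1.0, 1.0)
    s = np.linalg.norm(v)
    if s < 1e-12:
        return np.zeros(3)
    return (2.0 * np.arctan2(s, w)) * (v / s)

def quat_exp(v):
    """Inverse of quat_log: rotation vector back to a unit quaternion."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.array([0.0, 0.0, 0.0, 1.0])
    axis = v / theta
    return np.concatenate([np.sin(theta / 2.0) * axis, [np.cos(theta / 2.0)]])

def filter_orientations(quats, mask):
    """Filter a quaternion signal via its vector-space analogue:
    take logs of frame-to-frame displacements, convolve, re-exponentiate."""
    # 1. Angular displacement vectors w_i = log(q_{i-1}^{-1} q_i).
    w = np.array([quat_log(quat_mul(quat_conj(quats[i - 1]), quats[i]))
                  for i in range(1, len(quats))])
    # 2. Apply the filter mask componentwise in the vector space.
    wf = np.column_stack([np.convolve(w[:, d], mask, mode="same")
                          for d in range(3)])
    # 3. Rebuild the orientation signal by composing exponentials.
    out = [quats[0]]
    for v in wf:
        out.append(quat_mul(out[-1], quat_exp(v)))
    return np.array(out)
```

Because the displacements q_{i-1}^{-1} q_i are relative quantities, premultiplying the whole signal by a fixed rotation (i.e., changing the reference frame) leaves them unchanged, which is the source of the coordinate-invariance.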

Transformation between linear and angular signals


Video


Publications



Personnel


[Last modified : Feb 11, 2003]