Inverse kinematics using dynamic joint parameters: inverse kinematics animation synthesis learnt from sub-divided motion micro-segments


In this paper, we describe a novel parallelizable method for the fast computation of inverse kinematics (IK) animation. Existing IK techniques are sequential: they compute each skeletal pose from the previous one alone. However, for a given trajectory, both the previous and the following postures should inform a natural posture for the current frame. Moreover, these techniques ignore the fact that skeletal joint limits vary with the spatio-temporal configuration of the skeleton. We therefore propose an extension of the IK model that uses dynamic joint parameters to overcome these major drawbacks of traditional IK approaches. Our constraint model relies on motion capture data of human motion: the dynamic joint motion parameters, embedding dynamic joint limit values and feasible poses, are learned automatically. The joint information is stored in an octree, which clusters the data and provides fast access to it. Whether the trajectory of the end-effector is given as input or the target positions arrive as a data stream, all the computed poses are assembled into a smooth animation sequence using parallel filtering and retargeting passes. The benefits of our approach are twofold: first, the joint constraints are dynamic (as in a real human body) and learned automatically from real data; second, the processing time is reduced significantly thanks to the parallel algorithm. After describing our model, we demonstrate its efficiency and robustness, and show that it generates high-visual-quality motion and natural-looking postures with a significant performance improvement.

The Visual Computer