Physically Based Synthesis of Animatable Face Models


We propose a physically based method to automate the synthesis of talking heads capable of performing animation encoded in an MPEG-4 FBA stream. Starting from an input triangle mesh, we build the corresponding anatomical face model using techniques already known in the literature; the novelty of our method lies in combining these techniques with the MPEG-4 FBA specification. The anatomical model comprises a multi-layered soft tissue representing the facial skin, a muscle map, and the underlying bony structure, including a movable jaw. Given an MPEG-4 Facial Animation Parameter (FAP) expressing a basic action of the face, we use the facial model to perform this action by contracting the proper muscle group and, if required, moving the jaw. The resulting deformed face is called a morph target (MT). The whole set of morph targets forms an Animatable Face Model (AFM), which we employ to produce realistic facial animation at interactive rates.
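At runtime, an AFM of this kind is typically animated by blending the per-vertex displacements of the morph targets, weighted by the FAP intensities decoded from the FBA stream. The sketch below illustrates such a linear morph-target blend; the function name, data layout, and FAP key `"open_jaw"` are hypothetical and not taken from the paper.

```python
# Minimal sketch of morph-target blending (hypothetical, illustrative only).
# Each morph target (MT) stores per-vertex displacements from the neutral face;
# a FAP intensity weights how strongly each MT contributes to the final mesh.
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

def blend_morph_targets(
    neutral: List[Vec3],
    targets: Dict[str, List[Vec3]],   # FAP name -> per-vertex displacements
    weights: Dict[str, float],        # FAP name -> intensity (0 = neutral)
) -> List[Vec3]:
    """Linearly combine MT displacements on top of the neutral mesh."""
    out = [list(v) for v in neutral]
    for fap, disp in targets.items():
        w = weights.get(fap, 0.0)
        if w == 0.0:
            continue
        for i, d in enumerate(disp):
            for k in range(3):
                out[i][k] += w * d[k]
    return [tuple(v) for v in out]

# Example: half-intensity jaw opening on a two-vertex mesh.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {"open_jaw": [(0.0, -1.0, 0.0), (0.0, -0.5, 0.0)]}
deformed = blend_morph_targets(neutral, targets, {"open_jaw": 0.5})
```

Precomputing the MTs once per face makes this per-frame blend cheap enough for interactive rates, since it avoids re-running the physical simulation during playback.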

Workshop in Virtual Reality Interactions and Physical Simulations (VRIPHYS)