Framestore goes to Where the Wild Things Are
12 January 2010, 14:23
Wild at heart
Framestore’s post-production process began with tracking the heads of the seven Wild Things – Carol, KW, Douglas, Ira, Judith, Alexander and The Bull – in 3D. The team then created CG versions of each of their faces. These CG heads included both the geometry of each head and high-resolution textures extracted from the footage, ensuring they matched the original models precisely. The faces were hand-animated to match the performances; the resulting geometry was then used to warp the original footage, making the mouths, eyes and other facial features talk and express.
This technique is known as projection mapping, and has often been used to make animals speak in films such as Babe. It suits small animated motions that are easy to hand-animate, but in recent years it has largely been superseded by full CG creatures with motion-captured movement. Motion capture is popular with studios because it offers a faster workflow and, some believe, more realistic results.
For Where the Wild Things Are, though, projection mapping and painstaking hand-animation gave the film a crafted look drawn straight from the talents of Framestore’s animators, rather than relying on software to replicate the movements of an actor. It also allowed the team to bring emotion to a facial structure that doesn’t exactly match that of a human.
However, projection mapping can’t add information that isn’t in the frame, so extra CG elements, such as the insides of the mouths, had to be added.
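The core mechanic described above – using animated CG geometry to warp the original footage – comes down to resampling each pixel of the frame through a displacement field. The sketch below is a minimal illustration of that idea, not Framestore’s pipeline: it assumes the displacement field has already been derived (in production it would come from projecting the difference between the neutral and hand-animated CG face back into screen space) and uses simple nearest-neighbour sampling for brevity.

```python
import numpy as np

def warp_frame(frame, flow):
    """Warp a frame by a per-pixel displacement field.

    frame: 2D array of pixel values (a single channel, for simplicity).
    flow:  (H, W, 2) array of (dy, dx) offsets; each output pixel is
           sampled from the displaced source location. In a real
           projection-mapping setup this field would be generated from
           the animated CG geometry, not supplied by hand.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Clamp sample coordinates to the frame bounds; nearest-neighbour
    # sampling keeps the example short (production would interpolate).
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

# Tiny demo: a uniform leftward sample offset shifts the image right,
# the way an animated mouth region drags the underlying footage with it.
frame = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = -1.0  # sample one pixel to the left -> content moves right
warped = warp_frame(frame, flow)
```

In production the displacement field varies per pixel and per frame, which is what lets a rigid animatronic face appear to talk while keeping the photographed fur and lighting intact.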
Framestore took the original footage (left) and tracked the model in 3D space to create a CG model of Alexander’s face (right). This was animated and used to warp his face to create the final shot (below).
The team based the facial animations on both the voice recordings and the on-set actors’ performances, as animation supervisor Michael Eames explains.
“Spike put together video selects of the original voice recordings,” he says. “We called them spotting sessions. We would run them in parallel to a sequence and go through what worked, what didn’t, and what the intention of each performance was.”
Eames likens this process to a director talking through a scene with an actor.
“These were referenced both by the suit performers on-set, and later ourselves back in the studio,” he continues. “It was important that the performance worked in the body as well as the facial element. Ultimately, the facial performance was developed and animated by hand.”