Jonah Reinhart
Rigger
jonahrnhrt@gmail.com
Surface Based Facial Rigging
Application
Surface based rigging is a technique that involves setting up NURBS surfaces that match the shape of the mesh and driving joints along the surfaces' UV coordinates using various methods. This technique allows the rigger to move vertices across the surface of the mesh, simulating the movement of flesh over an underlying skeleton, and it helps to maintain a more consistent overall volume. I am going to cover three main methods that can be used to drive the UV coordinates of the joints on the surface, as well as some related techniques.
Tips for Surface Creation
Before setting up any joints on the surface you will, of course, need to set up the surface itself. The U coordinates of the surface should not be an issue, since the U tangents of the surface always run horizontally. The V tangents of the surface are a little harder to get right, but here are some tips.
For the best results, create curves in the XZ plane that match the curvature of the mesh. Create a curve at the maximum Y position for the desired surface and another at the minimum Y position. You will also need to add several curves between the two in order to guide the shape of the surface. As with most rigging, you will want to give the animator as wide a range of motion as possible, so the Y range may be wider than you expect. On some parts of the face the ends of the surface may not look the way you expect, such as the surface created for the eyebrows (seen below): the upper half of the surface is as you'd expect, but the lower half falls straight down in front of the eyes, which keeps the brows from intersecting the eyes when they are brought all the way down. The curves you create should then be rebuilt to have evenly spaced CVs so that the vertical lines of the UV grid move straight up and down. Then loft the curves into a surface. You can now tweak the curves, or go back and add more, in order to get the desired shape. The final step is to rebuild the surface to have uniform spans and a parameter range of 0 to 1.
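As a concrete example, here is a minimal Python sketch of that workflow. The curve names, span counts, and degrees are illustrative assumptions rather than fixed values:

```python
import maya.cmds as cmds

# Guide curves drawn in the XZ plane, ordered from bottom to top.
# The names and the count of five are placeholders.
guide_curves = ['brow_crv_01', 'brow_crv_02', 'brow_crv_03',
                'brow_crv_04', 'brow_crv_05']

# Rebuild each curve with evenly spaced CVs so the vertical (V) lines
# of the lofted surface run straight up and down.
for crv in guide_curves:
    cmds.rebuildCurve(crv, rebuildType=0, spans=8, degree=3,
                      keepRange=0, replaceOriginal=True)

# Loft the curves into the skull surface.
skull_srf = cmds.loft(guide_curves, degree=3, uniform=True,
                      constructionHistory=True, name='brow_skull_srf')[0]

# Rebuild the surface so it has uniform spans and a 0-to-1 parameter range.
cmds.rebuildSurface(skull_srf, rebuildType=0, spansU=8, spansV=8,
                    degreeU=3, degreeV=3, keepRange=0,
                    replaceOriginal=True)
```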
The wrap deformer may seem like a good tool to use here, but it will create a surface with parallel vertical spans. Once you have created the surface, you need to attach joints to it and orient them. There are two ways to do this: using a follicle, or using a pointOnSurfaceInfo node and an aimConstraint.
Connecting the Joint to the Surface
A key part of the approach is that there are two ways to orient a joint on the surface. First, you can connect the joint to a follicle, which orients it along the U and V tangents of the surface and the surface normals. Alternatively, you can orient the joint along the normals of the surface and use a second vector (like the tangent vector of a curve) as the other orientation axis. The first method is simple because you just need to plug in a UV position to get the translation and rotation of the joint. The second method is more complicated and will be discussed below.
Curve and Point Projection (Driver Method 1)
The first driver method allows the user to "project" points and curves onto the NURBS surface. It requires two surfaces: the surface that matches the shape of the mesh (which I will call the skull surface) and a flattened version of that surface (which I will call the flat surface). As an example, I will be showing off this technique on an eyebrow.
Above is an example of the two surfaces that this method requires. In the video below I will show how to project an individual point onto the skull surface. The information presented in the video is also presented in text below (the video version is currently in progress).
First we need to get the world space position we are "projecting" from. This position can come from anywhere, but I use either the world position of a locator or a point on a NURBS curve. With a locator, the world position is easily accessible as an attribute on the locator's shape node. The position of a point on a curve comes from a pointOnCurveInfo node. The position (from the locator or the pointOnCurveInfo node) is plugged into a closestPointOnSurface node whose input surface is the flat surface. Because the surface is flattened in the Z axis, this effectively projects the input position along the Z axis to the point on the surface that has the same X and Y position. This node returns both an XYZ position in world space and a set of UV coordinates.
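A minimal Python sketch of this projection step follows. The locator and surface names ('brow_loc', 'brow_flat_srf') are placeholders I have assumed for illustration:

```python
import maya.cmds as cmds

cps = cmds.createNode('closestPointOnSurface', name='brow_cps')

# The flat surface is the input surface for the projection.
cmds.connectAttr('brow_flat_srfShape.worldSpace[0]', cps + '.inputSurface')

# Drive the input position from the locator's world position.
cmds.connectAttr('brow_locShape.worldPosition[0]', cps + '.inPosition')

# The node now outputs both a world space position (.position) and the
# UV coordinates (.parameterU / .parameterV) of the closest point.
```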
At this point you need to decide how you want the joint to be oriented on the surface. The joint will be oriented normal to the surface, but its second axis of orientation can vary. If you want it oriented along the surface's U or V tangents, simply plug the UV coordinates from the closestPointOnSurface node into the UV inputs of a follicle and plug its rotate and translate outputs into a joint. Be sure to make the skull surface the input surface for the follicle. Below is an example of the required node network. If you also wanted to rotate the joint on the surface, you could either plug the rotate and translate into a buffer group above the joint and then rotate the joint itself, or use a plusMinusAverage node to add rotations to the follicle's output rotations before plugging them into the joint. I would strongly recommend the buffer group method over the node method.
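Here is a rough sketch of that follicle hookup in Python, continuing from the projection network above (the surface, node, and joint names are again assumed placeholders):

```python
import maya.cmds as cmds

# Follicle shape node; Maya creates its transform parent automatically.
flc = cmds.createNode('follicle', name='brow_flcShape')

# The skull surface (not the flat one) is the follicle's input surface.
cmds.connectAttr('brow_skull_srfShape.local', flc + '.inputSurface')
cmds.connectAttr('brow_skull_srfShape.worldMatrix[0]',
                 flc + '.inputWorldMatrix')

# The projected UVs place the follicle on the skull surface.
cmds.connectAttr('brow_cps.parameterU', flc + '.parameterU')
cmds.connectAttr('brow_cps.parameterV', flc + '.parameterV')

# A buffer group receives the follicle's outputs; the joint underneath
# stays free for extra rotation on top of the surface motion.
buffer_grp = cmds.group(empty=True, name='brow_jnt_buffer')
jnt = cmds.joint(name='brow_jnt')  # parented under the selected buffer group
cmds.connectAttr(flc + '.outTranslate', buffer_grp + '.translate')
cmds.connectAttr(flc + '.outRotate', buffer_grp + '.rotate')
```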
Alternatively, you could orient the joint along the tangent of a curve, or aim it at another object. Below I will show you how to use the tangent of a curve as the second axis of orientation. This method of course only works if you are projecting a point from a curve onto the surface.
We take the UV coordinates of the closestPointOnSurface node and plug them into a pointOnSurfaceInfo node whose input surface is the skull surface. Because the two surfaces have UVs which are aligned along the Z axis, the pointOnSurfaceInfo node returns a position on the surface that shares the same X and Y position as the original point on the curve. To get the two orientation axes we need to use an aim constraint: the target translate comes from the sum of the normal vector and the world space position of the point on the surface. This sum represents a point that starts on the surface and is moved out along the normal (this is, of course, the point we want the joint to aim at). The tangent of the original curve comes from the pointOnCurveInfo node and is used as the world up vector of the aim constraint. There are a few other required connections on the aim constraint, but they can be seen in the above image.
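The same network can be sketched in Python. The 'brow_cps' node and surface names carry over from the earlier sketches, a pointOnCurveInfo node named 'brow_poci' is assumed to already exist, and the joint name and aim/up axis choices are illustrative:

```python
import maya.cmds as cmds

# Look up the same UVs on the skull surface to get position and normal.
posi = cmds.createNode('pointOnSurfaceInfo', name='brow_posi')
cmds.connectAttr('brow_skull_srfShape.worldSpace[0]', posi + '.inputSurface')
cmds.connectAttr('brow_cps.parameterU', posi + '.parameterU')
cmds.connectAttr('brow_cps.parameterV', posi + '.parameterV')

# A joint at the scene root, driven directly by the surface position.
cmds.select(clear=True)
jnt = cmds.joint(name='brow_aim_jnt')
cmds.connectAttr(posi + '.position', jnt + '.translate')

# position + normal = a point pushed off the surface along the normal,
# which is the point the joint should aim at.
pma = cmds.createNode('plusMinusAverage', name='brow_aim_pma')
cmds.connectAttr(posi + '.position', pma + '.input3D[0]')
cmds.connectAttr(posi + '.normal', pma + '.input3D[1]')

# A target locator carries the aim point.
aim_loc = cmds.spaceLocator(name='brow_aim_loc')[0]
cmds.connectAttr(pma + '.output3D', aim_loc + '.translate')

# Aim the joint at the target; the curve tangent from the
# pointOnCurveInfo node acts as the live world up vector.
aim_con = cmds.aimConstraint(aim_loc, jnt, aimVector=(0, 0, 1),
                             upVector=(1, 0, 0), worldUpType='vector')[0]
cmds.connectAttr('brow_poci.tangent', aim_con + '.worldUpVector')
```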
Notes: The curve can be deformed in a variety of ways: joints, non-linear deformers, blendshapes, etc. There is a difference between using a motion path to get a position on a curve and using a pointOnCurveInfo node. In standard mode they work the same, but in percent mode a motion path will return a point at a certain percentage along the curve's length, while a pointOnCurveInfo node will return a point whose parameter value (U) is the given percentage of the curve's maximum parameter value.
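For reference, these are the two switches in question, shown on freshly created nodes (a sketch, not a full setup):

```python
import maya.cmds as cmds

# pointOnCurveInfo: turnOnPercentage treats .parameter as a fraction
# of the curve's parameter range, not of its arc length.
poci = cmds.createNode('pointOnCurveInfo')
cmds.setAttr(poci + '.turnOnPercentage', True)

# motionPath: fractionMode treats .uValue as a fraction of the curve's
# arc length, so 0.5 lands at the geometric midpoint of the curve.
mp = cmds.createNode('motionPath')
cmds.setAttr(mp + '.fractionMode', True)
```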
Set Driven Keys (Driver Method 2)
The curve projection method does not work when the surface bends more than 90 degrees away from the axis of projection (which is to say, it can't project onto a section of the surface it can't see when looking straight at the character, like the sides of the head). An easy solution is to drive the UV coordinates of the joint using set-driven keys. This method does not require a second surface. SDKs can be used to drive multiple joints with a single control to create deformations similar to a blendshape's, but which move vertices along non-linear paths. This technique was used by Stephen Candell on "Cloudy With a Chance of Meatballs" and is part of what keeps the characters' facial silhouettes so consistent (a link to his demo reel and other research sources are at the bottom of this page).
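A minimal sketch of the idea, assuming a control named 'brow_ctrl' and the follicle shape 'brow_flcShape' from earlier (the driver and key values are arbitrary):

```python
import maya.cmds as cmds

# At rest, the control's translateY of 0 maps to the follicle's rest V.
cmds.setDrivenKeyframe('brow_flcShape.parameterV',
                       currentDriver='brow_ctrl.translateY',
                       driverValue=0.0, value=0.5)

# Raising the control one unit slides the joint up the surface in V.
cmds.setDrivenKeyframe('brow_flcShape.parameterV',
                       currentDriver='brow_ctrl.translateY',
                       driverValue=1.0, value=0.8)
```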
You could also create a slightly more complicated node network to simulate a parent-child hierarchy of joints on the surface. You can do this by adding the UV translations of the "parent" control to those of the "children". Parent-child rotations cannot be simulated this way; they can be achieved using the third driver method.
The pseudo-hierarchy can be improved by having the controls move along the surface as well. To achieve this result you need to follow these steps (a scripted sketch follows the list).
- Take the translations of the control and put them through a multiplyDivide node so that they are all multiplied by negative one.
- Plug the output from this node into the translations of a group above the control, so that the translation attributes change but the control remains in place.
- Create another group above that one and connect the output translate and rotate from the follicle you are driving to that group.
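A rough Python sketch of those three steps, assuming the control 'brow_ctrl' and the follicle shape 'brow_flcShape' from earlier (group and node names are placeholders):

```python
import maya.cmds as cmds

# Group directly above the control (receives the negated translations),
# and a second group above that (receives the follicle's outputs).
neg_grp = cmds.group('brow_ctrl', name='brow_ctrl_neg')
flc_grp = cmds.group(neg_grp, name='brow_ctrl_flc')

# 1. Negate the control's translations with a multiplyDivide node.
md = cmds.createNode('multiplyDivide', name='brow_ctrl_neg_md')
cmds.setAttr(md + '.input2', -1, -1, -1)
cmds.connectAttr('brow_ctrl.translate', md + '.input1')

# 2. The negated values cancel the control's own movement, so its
#    translate attributes change but the control itself stays put.
cmds.connectAttr(md + '.output', neg_grp + '.translate')

# 3. The follicle moves the whole stack, carrying the control along
#    the surface.
cmds.connectAttr('brow_flcShape.outTranslate', flc_grp + '.translate')
cmds.connectAttr('brow_flcShape.outRotate', flc_grp + '.rotate')
```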
So effectively the control only inherits the transformations of the follicle, and is thus moved along the surface, leading to a much more intuitive visual design.
Unwrapping Surfaces (Driver Method 3)
The most versatile method, in my experience, involves unwrapping the skull surface to create a flat surface with evenly spaced rectangular spans. This is actually fairly easy: you just need to create a NURBS plane whose length and width match those of the original surface. Most of the time eyeballing it is sufficient, but you could measure the length of the hulls of the skull surface to be more precise.
Points are driven on this flattened surface with SDKs, curves, or any other method, and are then remapped from the flat surface to the skull surface using UVs. This method allows easy parent-child relationships between drivers, and it allows joints to move onto any part of the face without the 90-degree limit of projection.
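A rough sketch of the remap, assuming an unwrapped plane named 'brow_unwrap_srf' that shares the skull surface's 0-to-1 parameter range and a driven locator 'brow_drv_loc' sliding across it (all names illustrative):

```python
import maya.cmds as cmds

# Find the driven point's UVs on the flat, unwrapped plane.
cps = cmds.createNode('closestPointOnSurface', name='brow_unwrap_cps')
cmds.connectAttr('brow_unwrap_srfShape.worldSpace[0]', cps + '.inputSurface')
cmds.connectAttr('brow_drv_locShape.worldPosition[0]', cps + '.inPosition')

# Reuse the same UVs on the skull surface via a follicle. Because both
# surfaces share a 0-to-1 parameter range, the point remaps cleanly.
flc = cmds.createNode('follicle', name='brow_remap_flcShape')
cmds.connectAttr('brow_skull_srfShape.local', flc + '.inputSurface')
cmds.connectAttr('brow_skull_srfShape.worldMatrix[0]',
                 flc + '.inputWorldMatrix')
cmds.connectAttr(cps + '.parameterU', flc + '.parameterU')
cmds.connectAttr(cps + '.parameterV', flc + '.parameterV')
```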
What Comes Next?
The next stage in improving this technique is to have NURBS surfaces drive the positions of individual vertices rather than individual joint influences. This is not possible in out-of-the-box Maya and will require delving into the Maya Python API.
Further Reading
Figuring out this rigging technique required a lot of research and experimentation, and it can still be improved. Below is a list of the sources and inspiration I used in my research. If you have any questions or make any improvements to the system, feel free to contact me.
"A Hybrid Approach to Facial Rigging" a paper published collaboratively by Sony Pictures and Disney
David Komorowski's Demo Reel (one of the authors of the above paper)
Landon Graham's demo reel and blog. Landon was an intern at Pixar
Stephen Candell (a character rigger at Sony Pictures Imageworks)
"It's a UVN face rig Charlier Brown" a paper published by Blue Sky Studios.
Parametric Facial Rigging Experiment - Cedric Bazillou