Or maybe you've decided that a performance should change more gradually in a given take. All of that is possible with new software from Disney Research and the University of Surrey called FaceDirector, which lets you choose the facial performance you'd like for a given scene from multiple takes:

For those crying foul about manipulating faces or compromising the integrity of a scene, these things are already being done all the time. Another common technique is digitally removing and replacing the entire actor rather than just the face, which requires even more work. David Fincher likes to do this quite a bit, especially when the camera move is particularly difficult or when he wants to use different takes of a wide shot at the same time.

It's worth noting, in case you hadn't realized already, that this is still in the research phase, so don't expect to be able to purchase a copy of FaceDirector tomorrow. Here's more on their technique, which avoids the 3D reconstruction that's typically employed today:

We present a method to continuously blend between multiple facial performances of an actor, which can contain different facial expressions or emotional states. As an example, given sad and angry video takes of a scene, our method empowers a movie director to specify arbitrary weighted combinations and smooth transitions between the two takes in post-production.

Our contributions include (1) a robust nonlinear audio-visual synchronization technique that exploits complementary properties of audio and visual cues to automatically determine robust, dense spatio-temporal correspondences between takes, and (2) a seamless facial blending approach that provides the director full control to interpolate timing, facial expression, and local appearance, in order to generate novel performances after filming. In contrast to most previous works, our approach operates entirely in image space, avoiding the need of 3D facial reconstruction. We demonstrate that our method can synthesize visually believable performances with applications in emotion transition, performance correction, and timing control.
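
To get an intuition for what "operating entirely in image space" means, here is a rough Python sketch (my own simplification, not the authors' code) that time-aligns two takes using a naive dynamic time warping over per-frame audio features and then cross-fades the aligned frames with a director-chosen weight. The names `audio_a`, `frames_a`, and so on are hypothetical placeholders; FaceDirector's actual synchronization is far more robust (it combines audio and visual cues), and its blending is localized to the face rather than a whole-frame dissolve.

```python
import numpy as np

def align_by_audio(audio_a, audio_b):
    """Naive dynamic time warping over 1-D per-frame audio features.
    Returns a list of (index_a, index_b) frame correspondences."""
    n, m = len(audio_a), len(audio_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(float(audio_a[i - 1]) - float(audio_b[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def blend_takes(frames_a, frames_b, path, weights):
    """Weighted blend of two aligned takes, one output frame per correspondence.
    weights may be a scalar or a per-frame sequence; 0.0 keeps take A, 1.0 keeps take B."""
    if np.isscalar(weights):
        weights = [weights] * len(path)
    out = []
    for w, (ia, ib) in zip(weights, path):
        mixed = (1.0 - w) * frames_a[ia].astype(np.float32) + w * frames_b[ib].astype(np.float32)
        out.append(np.clip(mixed, 0, 255).astype(np.uint8))
    return out

# Example: ramp from the "sad" take to the "angry" take over the scene.
# path = align_by_audio(features_sad, features_angry)
# ramp = np.linspace(0.0, 1.0, len(path))
# blended = blend_takes(frames_sad, frames_angry, path, ramp)
```

A whole-frame dissolve like this would ghost badly the moment the camera or the actor moves, which is why the actual method computes dense spatio-temporal correspondences between takes and confines the blend to the face.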

There are tons of potential uses for this software, and while it did seem to work when the camera was moving, I expect the best-looking results to come from scenes where the actor is in a similar spot with similar lighting across takes. Though this could be considered part of the "fix it in post" method, it's very possible that the two great reactions you'd like to use simply live in different takes. To me, that's not using post as a crutch, but as a way to make use of every piece of material you shot on set. If tools like these can be made affordable, they could be a huge help when there's no money to go back and reshoot.

This also isn't Disney Research's first rodeo with this sort of thing; they've been developing all sorts of amazing technologies, including automatic redubbing and new techniques for creating HDR video:

For more on FaceDirector, check out the in-depth paper here.

Source: Disney Research