Can Emotion Be Digitized? An Intriguing Look at the Future of Motion Capture & VFX
Motion capture has come a long way since the rotoscoped animation of Snow White and the Seven Dwarfs and a-ha's "Take on Me" video.
Rotoscoping is an incredible technique in and of itself (I was pretty amazed when I first saw A Scanner Darkly), but it was just an early precursor to the motion capture technology we know today. This video by The Creators Project takes an interesting look at where we currently find ourselves in this new 3D VFX landscape, from creating holograms to the amazing process of scanning entire humans to make digital doubles.
Creating a photorealistic human being to use as a character in a film is no small task. Not only is the process complex and time-consuming, but it must be done effectively enough to avoid the uncanny valley. That's probably the biggest hurdle VFX artists have to jump over if they want to digitize humans for film -- making them look real enough -- because if they fall short, you have some seriously unsettling results on your hands. This video quickly explains why human replicas, whether they're robotic or digital, creep people out:
I'm sure most of us remember the Digital Emily Project. USC researcher Paul Debevec and his team created a CG model of actress Emily O'Brien's face, one that looked, at least back in 2009, incredibly believable and realistic (I had no idea it was computer generated until the end of the video). However, Debevec has said that one of the major issues with the technology was that each frame of the video took about 30 minutes to render.
Debevec's newest project, "Digital Ira," showcased in the Creators Project video, takes only a single second to process 30 frames, which means that higher render speeds are making it easier, faster, and less expensive for VFX artists to create lifelike models of the human face.
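To put those two figures in perspective, here's a rough back-of-the-envelope calculation -- a sketch using only the render times quoted above, not anything from Debevec's actual pipeline -- of how big that jump is:

```python
# Rough comparison of the render speeds quoted above (illustrative only;
# real render pipelines involve many more variables than this).

# Digital Emily: about 30 minutes to render a single frame (offline).
emily_seconds_per_frame = 30 * 60  # 1,800 seconds per frame

# Digital Ira: 30 frames processed in a single second (real time).
ira_seconds_per_frame = 1 / 30

speedup = emily_seconds_per_frame / ira_seconds_per_frame
print(f"Roughly a {speedup:,.0f}x jump in per-frame render speed")
# -> Roughly a 54,000x jump in per-frame render speed
```

In other words, the gap between the two projects is the gap between waiting half an hour for a single frame and rendering faster than a film plays back.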
So, where are we headed with motion capture technology? Avatar and Rise of the Planet of the Apes have shown the potential of realistic-looking digital characters (though those films sidestep the uncanny valley by never making their digital characters fully human). As time goes on, motion capture and VFX technology is only going to get better, and eventually we may not be able to tell whether the performances in our future favorite films come from a real-life human or a digital replica.
Taking it a step further, will we even need human actors in the future? Will we be able to create our own -- sculpting the perfect look, manipulating perfect performances, generating perfect moments? Can we ever fully replicate a human performance to use in film if so much of it relies on emotion? Essentially, can emotion be digitized? This is a question Debevec addresses:
That particular sort of fully-autonomous digital actor hasn't been developed quite yet. Every digital character that you've seen in a movie so far or a video game, if you are believing the performance and reading some real emotion out of it -- it's because there was a real actor that actually gave that performance. Maybe it really is, you know, that you are a set of atoms that exist on this Earth with needs and an ability to produce and give and contribute in some way and that that's gonna distinguish us from any kind of computer algorithm. Maybe we'll have to answer difficult questions, 'cause if a computer algorithm can do that at some point or if we embody it within a robot then, you know --
Source: The Creators Project