Can Emotion Be Digitized? An Intriguing Look at the Future of Motion Capture & VFX

Motion capture has come a long way since the rotoscoped animation of Snow White and the Seven Dwarfs and a-ha's "Take on Me" video.

Though rotoscoping is an incredible technique in and of itself (I was pretty amazed when I first saw A Scanner Darkly), it was just an early precursor to the motion capture technology we know today. This video by The Creators Project takes an interesting look at where we currently find ourselves in this new 3D VFX landscape, from creating holograms to the amazing process of scanning entire humans to make digital doubles.

Creating a photorealistic human being to use as a character in a film is no small task. Not only is the process complex and time-consuming, but it must be done well enough to avoid the uncanny valley. That's probably the biggest hurdle VFX artists have to clear if they want to digitize humans for film: making them look convincingly real, because anything short of that produces some seriously unsettling results. This video quickly explains why human replicas, whether they're robotic or digital, creep people out:

I'm sure most of us remember the Digital Emily Project, in which USC researcher Paul Debevec and his team created a CG model of actress Emily O'Brien's face, one that looked, at least back in 2009, incredibly believable and realistic (I had no idea it was computer generated until the end of the video). But Debevec has said that one of the technology's major limitations was speed: each frame of the video took about 30 minutes to render.

Debevec's newest project, "Digital Ira," showcased in the Creators Project video, processes 30 frames in a single second. Render speeds like that are making it easier, faster, and cheaper for VFX artists to create lifelike models of the human face.
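
To put that leap in perspective, here's the back-of-the-envelope arithmetic using the figures quoted above (a rough comparison, not a formal benchmark):

```python
# Rough per-frame render-speed comparison between Digital Emily and
# Digital Ira, using only the figures quoted in this article.

emily_sec_per_frame = 30 * 60   # ~30 minutes to render one frame
ira_sec_per_frame = 1 / 30      # 30 frames rendered per second (real time)

speedup = emily_sec_per_frame / ira_sec_per_frame
print(f"Digital Ira renders roughly {speedup:,.0f}x faster per frame")
# -> Digital Ira renders roughly 54,000x faster per frame
```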

So, where are we headed with motion capture technology? Avatar and Rise of the Planet of the Apes have shown the potential of realistic-looking digital characters (though those films sidestep the uncanny valley by never making their digital characters fully human). As time goes on, motion capture and VFX technology are only going to get better, and eventually we may not be able to tell whether a performance in one of our future favorite films was given by a real-life human or a digital replica.

Taking it a step further, will we even need human actors in the future? Will we be able to create our own -- sculpting the perfect look, manipulating perfect performances, generating perfect moments? Can we ever fully replicate a human performance to use in film if so much of it relies on emotion? Essentially, can emotion be digitized? This is a question Debevec addresses:

That particular sort of fully-autonomous digital actor hasn't been developed quite yet. Every digital character that you've seen in a movie so far or a video game, if you are believing the performance and reading some real emotion out of it -- it's because there was a real actor that actually gave that performance. Maybe it really is, you know, that you are a set of atoms that exist on this Earth with needs and an ability to produce and give and contribute in some way and that that's gonna distinguish us from any kind of computer algorithm. Maybe we'll have to answer difficult questions, 'cause if a computer algorithm can do that at some point or if we embody it within a robot then, you know --     

12 Comments

Rendering has gotten faster thanks to GPUs, but what this article doesn't take into account is the huge amount of work that goes into cleaning up and refining motion capture data. The campaign for Andy Serkis to get an Oscar was controversial because once a team of artists has painstakingly reworked the raw data by hand, very little of the original performance is likely to remain. Serkis may have deserved an Oscar for his performance capture work, but the CGI artists who brought those performances to the screen are at least equally deserving of recognition.
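
Just to give a feel for what even the most trivial cleanup step looks like, here's a toy moving-average smoother for one jittery joint channel (purely illustrative, with invented data; real pipelines use far more sophisticated filtering, gap-filling, and hand animation on top):

```python
# Toy example: smoothing jitter in a single mocap channel with a
# centered moving average. Real cleanup also involves gap-filling,
# marker-swap fixes, retargeting, and extensive hand animation.

def smooth(samples, window=5):
    """Return a centered moving average of a list of per-frame values."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

noisy_rotation = [10.0, 10.4, 9.7, 10.9, 10.1, 10.5, 9.8]  # degrees per frame
print(smooth(noisy_rotation))
```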

Performance capture is getting better all the time, but it's still not up to the task of capturing subtle emotional nuance - and even when it finally is, layers of animation work will be added to refine the final look. We'll see fully photorealistic CGI actors in lead roles one day - mostly Hollywood legends who have passed away, I suspect (at first, anyway) - but it'll be some time before it's cheaper to use a digital Tom Cruise than the real one.

June 3, 2015 at 3:46PM

Survivor Films
Writer/Director

I noticed that among the giants of motion graphics animation, e.g. Maya, is a seemingly little-known and much cheaper piece of software called Messiah Studio, used by a small hardcore community. Here is an example of its facial animation capability (done by hand, i.e. NOT motion captured):
https://www.youtube.com/watch?v=kVW16FmHnak
More emotions, i.e. fear/panic:
https://www.youtube.com/watch?v=46On_vdXGsQ
The loyal users frequently lament the lack of documentation and tutorials, in addition to the perceived marketing failures of the parent company, hence Messiah is regarded as a quiet little secret in the industry. The situation is likened to a flagship left to rust, yet still capable of great magic.
http://www.projectmessiah.com/x6/community.html
It would be great if NFS or anyone else could offer more insight on this or other solutions, especially as facial animation options seem to have dwindled since the death of Softimage.

June 3, 2015 at 4:36PM

Saied M.

Messiah Studio isn't a secret, and the cost of a piece of software isn't really a factor anymore (they're all <$5K now, as opposed to >$50K not that long ago).
Blender is free, but no one uses it...
Animation software is just a cog in a very large, often very complicated wheel we call a 'pipeline,' and many studios write their own, for various reasons...

June 3, 2015 at 5:18PM

Michael Goldfarb
Senior Technical Director - Side Effects

Actually, Blender has a big pool of artists who use it. It's just that big studios don't pick it up because it doesn't follow traditional workflow conventions. But things will change soon. Blender has a lot of features; it just needs refinement from industry pros.

June 4, 2015 at 9:12PM

Einar Gabbassoff
D&CD at Frame One Studio

"traditional workflow conventions"
Yeah, this is called a pipeline - and Blender just doesn't fit; it would need to be rewritten. That said, it's pretty good, but I wouldn't recommend it for those looking to get into VFX... or for anyone, for that matter. You can get Houdini Indie (full featured) for $99 and use it commercially on any project that doesn't exceed $100K in gross revenue...

June 10, 2015 at 4:32PM

0
Reply
Michael Goldfarb
Senior Technical Director - Side Effects
344

As far as I'm aware (and it's not my field, so I could be wrong), there aren't any dedicated facial animation packages anymore (I believe Softimage's Face Robot got Borged into Autodesk's vast technology pool) - facial animation is a joint effort between character modellers, riggers, and animators, each generally specialists in their field.

The modellers will create a head with the right topology for the necessary deformations and, if necessary, specific blend shapes (morphs) to hit key poses, mouth shapes, etc. The riggers create weight maps to control how specific regions of the mesh deform, as well as the underlying skeletal structure the motion capture data is retargeted to, and lastly the controls the animators will use. The animators then take the mesh, the rig, and the mocap data and animate over the top, building up layers of believable nuance and personality whilst also fixing anything that's broken.
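
For anyone curious what the blend shape part boils down to, here's a minimal sketch of the idea (the shape names and vertex data are made up for illustration; this is not any particular package's API): the final face is the neutral pose plus a weighted sum of sculpted target offsets.

```python
# Minimal blend shape (morph target) mixing: each vertex of the final
# face is the neutral pose plus a weighted sum of target offsets.
# Shape names and vertex positions below are invented for illustration.

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]            # neutral-pose vertices
targets = {
    "smile":   [(0.0, 0.2, 0.0), (1.0, 0.3, 0.0)],      # sculpted key poses
    "brow_up": [(0.0, 0.0, 0.1), (1.0, 0.0, 0.0)],
}

def blend(neutral, targets, weights):
    """Apply weighted morph-target offsets to the neutral mesh."""
    result = []
    for i, base in enumerate(neutral):
        offset = [0.0, 0.0, 0.0]
        for name, w in weights.items():
            for axis in range(3):
                offset[axis] += w * (targets[name][i][axis] - base[axis])
        result.append(tuple(base[a] + offset[a] for a in range(3)))
    return result

# Animators (or retargeted mocap) drive the weights frame by frame:
print(blend(neutral, targets, {"smile": 0.8, "brow_up": 0.25}))
```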

All that's a gross simplification of course, which doesn't even take into account the cleanup of the mocap data which happens before it gets remapped to the mesh.

June 3, 2015 at 4:53PM

Survivor Films
Writer/Director

And let's not forget textures and shaders...

June 3, 2015 at 5:19PM

Michael Goldfarb
Senior Technical Director - Side Effects

I think this looks amazing. Kind of reminds me that I need to play L.A. Noire. I thought it looked spot on as far as looking real, but there was a slight floatiness or lag in the face/eye movement. Hmmm...

Can we get a CGI Rick Moranis out of retirement???

June 3, 2015 at 5:10PM

Donovan Vim Crony
Director, DP, Editor, VFX, Sci-Fi Lover

It's one of those technologies that, while amazing for entertainment, can be catastrophic in the wrong hands. Any book or movie that deals with this material illustrates how dangerous it can become. When you can no longer trust what's real, especially when it comes to leadership or relationships, Pandora's box is open. People can be manipulated so easily with edited video and sound bites now. Once people can create whatever they want without any source material, think of the consequences. It's scary... and this is MY field =)

In the video, the guy is recreating his own child. Why exactly? What's to gain?

June 4, 2015 at 5:24AM

Josh.R
Motion Designer/Predator

Eyes are windows to the soul. Many of the monsters in film (vampires, zombies, etc.) are scary because they lose that part of a person. I think the discomfort of realistic replication comes from the absence we feel of that connection to a real person.

June 7, 2015 at 6:58PM

Ryan Gudmunson
Recreational Filmmaker

The next use I'd like to see is a greater range of roles that great actors can play. The dead, doll-eyed attempt at creating a young Jeff Bridges in Tron: Legacy was woeful, but I would love to see Harrison Ford have one more go as a 42-year-old Indiana Jones.

June 13, 2015 at 10:16PM, Edited June 13, 10:16PM

J Robbins