Machine learning has been "coming soon" to film for a long time, but aside from a few very small examples (the face recognition in Resolve comes to mind), there hasn't been a tool released yet that radically changes our workflow in the way AI promises to. There are fun experiments with AI cutting trailers or AI-automated editing for photo-gallery-type videos, and of course, there's the running Twitter joke: "I forced an AI to watch X and it created this script," but workflow in 2020 isn't that different from workflow in 2014.

In reality, the tools used by working professionals day in and day out haven't yet been disrupted by a revolutionary AI tool. But does there need to be one? I got to play with a pre-release version of the new color matching tool Colourlab, and it appears that we are about to see our first true machine-learning tool come to the post industry.


What is it?

Colourlab is a colorist-built, neural-network-driven application that analyzes your footage for content to enable automatic matching of a look. You bring shots into Colourlab, grade one shot in a sequence, say "match scene," and the AI analyzes all the shots and then grades them to match your key image. It takes around 10 seconds per shot, and it works.

Colourlab has multiple perceptual models it uses to analyze the shot, and if it doesn't work, you can go through and switch models. During our testing, there was never a time in which we couldn't find at least one model that would allow the shots to match when working with "normal" scenes. There are some outliers, which we will talk about below. If you are talking about a traditionally shot scene with consistent lighting, Colourlab is strong enough to be considered part of the toolbox.
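
Colourlab hasn't published how its perceptual models work, and a trained neural network is far more sophisticated than anything we could mock up here. But to give a feel for what "matching a shot to a key image" means at all, here is a minimal Python sketch of two classic statistical approaches; the two functions stand in for the idea of switching models when one doesn't land. Everything in it, file names included, is illustrative and is not Colourlab's actual method.

```python
# Toy illustration of reference-based shot matching -- NOT Colourlab's
# neural-network approach. Two interchangeable "models": Reinhard-style
# mean/std transfer in Lab space, and per-channel histogram matching.
import cv2
import numpy as np

def match_lab_stats(source, reference):
    """Shift each Lab channel of `source` to the mean/std of `reference`."""
    src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        src[..., c] = (src[..., c] - s_mean) / s_std * r_std + r_mean
    return cv2.cvtColor(np.clip(src, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)

def match_histograms(source, reference):
    """Remap each channel of `source` so its histogram matches `reference`."""
    out = np.empty_like(source)
    for c in range(3):
        s_vals, s_counts = np.unique(source[..., c], return_counts=True)
        r_vals, r_counts = np.unique(reference[..., c], return_counts=True)
        s_cdf = np.cumsum(s_counts) / source[..., c].size
        r_cdf = np.cumsum(r_counts) / reference[..., c].size
        mapped = np.interp(s_cdf, r_cdf, r_vals)  # source CDF -> reference value
        out[..., c] = mapped[np.searchsorted(s_vals, source[..., c])].astype(np.uint8)
    return out

# Hypothetical file names: a hand-graded key frame and an ungraded shot
# from the same scene. Try the second "model" if the first doesn't match.
key = cv2.imread("graded_key_frame.png")
shot = cv2.imread("ungraded_shot.png")
cv2.imwrite("matched.png", match_lab_stats(shot, key))
```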

[Image: Colourlab at work on a normal scene]

To understand how radical this is, it's important to get a handle on how this is normally done. On a full-budget job, you literally sit and grade each shot one at a time. On a lower-budget job, you might sit with a client and go scene by scene to "set looks" on a key image for each scene, then the client disappears until you are ready to show them a review pass. Just matching entire scenes to those reference images would usually take eight hours for 30 or so minutes of footage if you were moving quickly. Colourlab does a reasonably good job in about 30 minutes.

The key is "reasonably good." Even on that low budget project, you still have a finalizing session at the end where you go through and smooth everything out and make sure the client approves it all, and that will still be required with Colourlab. The brilliance of this design isn't that it automatically delivers a perfect color grade with no human intervention, the technology isn't there yet. The brilliance is that it fits into an already existing common workflow where users are accustomed to a final polishing passStep 1: Meet with client to set looks, Step 2: Colorist works solo to "spread" looks to the whole project, Step 3: Client returns to polish the grade.

It doesn't require you to change that workflow; it just changes Step 2 from being measured in days to being measured in hours.

[Image: Colourlab matching a normal scene]

With Colourlab, it's possible to imagine a three-day feature grade: you spend Day 1 setting looks, Colourlab does a match that night, and Days 2 and 3 are pure polish passes. That is an insanely short schedule for grading a feature, even for the fastest of colorists, since matching looks across an entire scene normally takes a lot of time, effort, and polish. And we're already used to that polish as part of the routine.

That alone makes this a great first step for integrating AI into the workflow as we deal with products that don't output "perfect" results immediately. Maybe someday there will be an AI where you click "grade" and it does the whole movie, but for now, there's this and it is a major leap forward.

Pulling a Look

If that were all Colourlab offered, that would be enough, but it gets even better. When I teach color grading, one helpful exercise I give students is to find a reference image from somewhere in the world (Google Images, a favorite movie, Instagram, etc.) and bring it into the grading application to practice matching their footage to the "look" of the reference image: put it up side by side and see if you can make your footage feel like the image you are referencing.

Students always ask, "Can I copy the look from the reference image?" and my answer has always been, "You can't; it's just there as a guide for you to explore." Well, with Colourlab, you can.

[Image: pulling a look from a reference image in Colourlab]

Colourlab can use that same neural network to analyze the "look" of the reference images you bring in and apply that look to your grade. It does this by analyzing the content of the shot itself. If it sees something blue and knows it's an ocean, it doesn't create a "blue" look, since the ocean is supposed to be blue. But if it sees a person and that person has blue undertones, it can create a look with blue undertones; since most people aren't blue, the neural network knows the blue is part of the look created in the grade, not part of the scene.

This is a great time to remember that a "look" isn't only about the color grade, but also about the content of the shots in the scene. In the above match for O Brother, Where Art Thou? (graded by Jill Bogdanowicz), there's no sky in the reference image, so Colourlab doesn't do anything to the blue sky in our source footage, even though the sky is desaturated in the film as well.

You'll need to find reference images with content similar to your shot. Some things, like the glowy backlight of the source shot, need to happen on set; Colourlab isn't going to add a backlight that blows out to a hazy glow if it isn't there. But the matching of the overall color palette is an impressive place to start working from.

[Image: Colourlab's color management tools]

Color Managed 

One vital step in the process to be conscious of is that Colourlab works best with color-managed footage. This means it isn't going to do a great job of mixing a Rec.709 shot with a Log-C shot in their native formats. However, both Resolve and Colourlab have pretty simple color management tools for telling the software what each shot is (so the software can manage it properly). Before starting work, you'll need to go through, select all the Rec.709 shots and tell the system they are 709, then do the same for the Log shots, specifying which flavor of Log they were (all the major flavors, such as ARRI LogC, BMD, Canon, and Sony, are represented). In 2020, that is likely something you are doing anyway.
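
Because that tagging pass is repetitive, it's also the kind of thing you can script. Below is a rough sketch using Resolve Studio's built-in Python scripting API, assuming the project uses Resolve color management; the clip-naming convention and the exact color space strings are our assumptions, so check them against what your version of Resolve actually lists.

```python
# Sketch: batch-tag input color spaces in Resolve Studio before a
# Colourlab session. Assumes Resolve color management is enabled and
# that the camera source can be guessed from the clip name (an assumption).
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# Hypothetical name-fragment -> color space mapping; the exact strings
# vary by Resolve version, so verify them in your project settings.
SPACE_BY_NAME = {
    "ALEXA": "ARRI LogC",
    "BMD": "Blackmagic Design Film",
    "SONY": "Sony S-Log3 S-Gamut3",
}

for clip in media_pool.GetRootFolder().GetClipList():
    name = clip.GetName().upper()
    space = next((s for frag, s in SPACE_BY_NAME.items() if frag in name), "Rec.709")
    clip.SetClipProperty("Input Color Space", space)
    print(clip.GetName(), "->", space)
```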

DaVinci Resolve Linking 

One question many will ask is, "How does this work with Resolve?" The answer: seamlessly. You prep your project as you normally would in Resolve Studio, then open Colourlab and click Resolve Fetch. It brings your timeline over to Colourlab, and you work. When done, click Resolve Sync and it sends the project right back to Resolve, occupying the first two nodes of the color grade: one for the show-look LUT and the other for the grade.
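
The Fetch and Sync buttons handle all of this for you, but the two-node structure that comes back is worth understanding, since it's the same thing you could build by hand or by script. As a hedged illustration of that node idea (not Colourlab's actual sync code), here's how Resolve's scripting API can drop a show-look LUT onto the first node of every clip in the current timeline; the LUT path is hypothetical.

```python
# Sketch: apply a show-look LUT to node 1 of every clip in the current
# timeline, mimicking one half of the two-node structure Colourlab's
# Resolve Sync creates. Runs in Resolve Studio's scripting console.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
timeline = resolve.GetProjectManager().GetCurrentProject().GetCurrentTimeline()

SHOW_LUT = "/path/to/ShowLook.cube"  # hypothetical LUT file

for track in range(1, timeline.GetTrackCount("video") + 1):
    for item in timeline.GetItemListInTrack("video", track):
        item.SetLUT(1, SHOW_LUT)  # node index 1 = the clip's first node
```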

[Image: Resolve Fetch in Colourlab]

Does this Destroy Creativity? 

Grumps will often say things like, "Well, that takes all the creativity out of it," but those are the same folks who said, "What is a DP going to do with digital? It's so easy now." Of course, cinematography still isn't easy; it's about choosing images that tell a story, and that is hard, thorough work with digital the same way it is with film. Colourlab will still require creative users to make choices about what a scene is supposed to look like, which has always been the "fun" part of color grading. Matching has always been a bit less engaging, and having an automated tool actually do it well is kind of amazing.

Will something be lost? Definitely. An early editing teacher of mine used to show a clip from an old film in which one of the shots was run backward in a way most viewers wouldn't notice, but which created menace for a character. He used it as an example of the perks of cutting on a Steenbeck: when you worked on a flatbed editing table, you had to watch your film backward and forward over and over, winding and rewinding, and that likely inspired the choice in the film.

Digital editing doesn't force that same deep knowledge of your footage and probably wouldn't have sparked that same idea. Of course, we all moved over to digital editing because the benefits overwhelmed the drawbacks. The same will happen here. Yes, the colorist won't spend hours matching every shot by hand, which was traditionally a time when the colorist got to know the project well, and that sometimes inspired creative solutions. But the benefits are so great that the drawbacks (losing that intimacy with the film) might be worth it.

People sometimes forget that film has been "high tech" since the beginning. All filmmaking is a collaboration between technology and creativity, and as the technology changes, creativity will change right along with it.

Secret Sauce 

The real secret sauce here doesn't appear to be just the machine learning, but the individual behind it. Founder Dado Valentic was a working colorist when he embarked on the journey of creating Colourlab, and throughout the process of working with it, it is clear that it was designed by someone who actually does the job. The interface feels pleasantly familiar, and the toolset is designed in a way that seems easy to integrate into your workflow. While there are other auto-color tools out there, none are this well designed, and frankly, none show anywhere near the results you get here.

A colorist's eye is going to be more attuned to matching shots than pretty much anyone else's on earth. Shots that an engineer might think "match well" are still going to require a ton of work for a colorist to truly ensure a seamless flow. Thus, it makes sense that a colorist was the one who built this, since it needs to meet actual colorist standards.

The project was going strong when the coronavirus hit and they had to pivot. They had been focused on building a high-end tool for facility use that would likely involve a larger hardware and software investment, but all of a sudden, everyone was working from home. So Colourlab ported the project over to home systems, opting for a volume pricing model instead of a white-glove, high-end one. By taking advantage of OpenML and Metal 2, the company was able to get the software running on a MacBook Pro even faster than it originally ran on its $20,000 custom PC hardware.

This is the result of three years of intense development that gave Colourlab a neural network ready to port, led, of course, by a designer with a deep enough working knowledge of color grading to guide the project where it needed to go. It will be out this fall for $1000/year, which honestly feels like a very reasonable price for an individual working colorist. If you are working from a home studio and doing 10-15 features per year, the time savings alone will be worth the price of admission. The drudgery of matching is often farmed out to a junior colorist (which costs money, too), and with social distancing in place, this is a way to have the junior colorist be a machine.

But will this mean fewer "junior colorist" or "color assist" jobs, since one of the general worries about AI is that it cuts out those ladder-climbing positions? Honestly, it's more likely that you will still need assistants managing the project, but maybe the job will involve less drudgery and more reasonable hours. Linking the project to and from Resolve will take a bit of work to make sure the color management is handled properly, for instance.

Struggles

As noted above, we were shocked at how well it handled matching on "normal" scenes. Scenes with controlled lighting and relatively consistent exposure throughout matched almost magically. However, one of the demo scenes we use in teaching (from a short film called The River) is anything but that. Set at dawn and shot at dawn, the scene starts dark and gets light, and matching is all over the place as shots from later in the morning are cut earlier into the sequence. Even harder to navigate are pickups shot months later on another camera.

[Image: pickup match] Sony on one side, Canon 5D on the other.

It is fair to say that Colourlab struggled with this footage, but it's also important to note that professional colorists would struggle a bit with this footage, as well. This is exactly the kind of sequence that most colorists would flag on the first watch-through as deserving more time and attention in the process.

And the results with Colourlab weren't unusable by any means. It's just that, compared to the more "normal" scenes we tested, where the footage required very little tweaking, more time had to be spent ensuring the right model was chosen, and sometimes that model needed a bit of tweaking to match perfectly.

[Image: pickup match, take two] We created a two-shot "scene" just to focus the attention on this one match. An example of the type of "habits" color assistants will learn to develop with new tech.

One interesting observation from working through this process is that Colourlab seems most focused on making sure skintones, and people in general, match. There was a slight blue cast to one of the pickup shots that Colourlab left in because it appeared to decide that the shirt was just blue, and once it had matched the skintones, it was happy.

This makes sense as a design decision, but it's the kind of interesting thing we'll learn to look out for when grading. The polish pass on a Colourlab grade will probably involve more looking at inanimate objects than usual, as you start to rely on Colourlab to have gotten the skintones right. Since so much machine learning work goes into facial recognition, it makes sense that this would be the strongest area for the algorithm.
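
We can only speculate about what's happening under the hood, but the behavior is consistent with a matcher that weights detected faces heavily. As a purely illustrative sketch (using OpenCV's stock face detector, not anything from Colourlab), computing your matching statistics only over face regions would leave a blue shirt out of the equation entirely:

```python
# Speculative illustration of "skintones first": restrict matching
# statistics to detected face regions so wardrobe and set never enter
# the transfer. Uses OpenCV's bundled Haar cascade face detector.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_region_stats(image_bgr):
    """Mean/std of Lab values inside detected faces, or None if no face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    pixels = np.concatenate(
        [lab[y:y + h, x:x + w].reshape(-1, 3) for (x, y, w, h) in faces])
    return pixels.mean(axis=0), pixels.std(axis=0)

# Feeding these face-only stats into a mean/std transfer (as in the
# earlier sketch) balances skintones while ignoring that blue shirt.
```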

What we're really hoping for in a future revision is the ability to set multiple looks, say for the start and end of a scene, and then have the machine gradually transition between the two. This seems doable, and it's the kind of feature that would have tackled even our tricky dawn scene quite well.
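
Under the (big) assumption that a look can be baked into a 3D LUT, the blending itself is nearly trivial, which is why this feels doable. A minimal sketch with stand-in LUT data:

```python
# Sketch: blend a "dawn" look into a "morning" look across a scene by
# interpolating two 3D LUTs. The LUTs here are random stand-ins; real
# ones would come from the looks set on the first and last shots.
import numpy as np

def lerp_luts(lut_a, lut_b, t):
    """Linearly blend two N x N x N x 3 LUTs; t=0 gives A, t=1 gives B."""
    return (1.0 - t) * lut_a + t * lut_b

N = 33  # a common 3D LUT resolution
dawn_look = np.random.rand(N, N, N, 3).astype(np.float32)
morning_look = np.random.rand(N, N, N, 3).astype(np.float32)

num_shots = 12  # shots in the scene, in story order
per_shot_luts = [lerp_luts(dawn_look, morning_look, i / (num_shots - 1))
                 for i in range(num_shots)]
```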

The Future

Will every color grading platform have something like this in a few years? To be honest, probably not. Most color platforms have been working on something like auto-color for years now and aren't anywhere close. Without high-end colorists guiding the project, a tool like this is unlikely to ever get where it needs to be.

Right now, Colourlab has a deep integration with Resolve on Mac, which makes sense considering both the dominance of Resolve as a platform and Apple's deep investment in pro applications and workflows over the last few years. Apparently, both OpenML and Metal 2 played big parts in how powerful the software can be on laptop hardware. The major competitors will likely try to implement something similar (or try to buy Colourlab and integrate it into a bigger platform), but we're not sure the results would be as good without a colorist training the algorithm.

Colourlab will be available this October for $1000/year. We're hopeful they'll be able to offer some sort of academic license for students to practice with, and perhaps even a non-commercial license, but considering that it works by simply linking with Resolve, something like watermarking might be difficult. A free trial would be nice for folks who want to play with it themselves, and maybe a one-off license allowing a single Resolve roundtrip would be cool. However, this isn't likely to be a tool you use once. If you are a colorist, an editor/colorist, or own a post house, it's a tool you need to look at deeply and consider whether it will save you time in your workflow, which I think it will.

Check out their site for more info.

Source: Colourlab