"Woah, your camera really is better," was my wife's response to the first Cinematic Mode video I sent her.

We had spent a day with our kid at Governors Island. I, of course, shot a ton of video, and once we got back and got our kid down for a nap, I looked through what I shot and sent it over (none of it is in this article because, you know, I'm not ready for my kid to become internet famous).

So Cinematic Mode absolutely has that power. It's immediately noticeable, and the effect is definitely, "Oh, wow, that looks nice."

As for exactly why a shallower depth of field looks nice, there are whole aesthetics articles on that. Is it because it recreates something closer to how the eye sees? Or is it because we grew up with it in 35mm movies and TV shows, and we're simply used to it? Is background softness just pleasing since it reminds us of soft puppy fur?

We're not going to settle that here, but we can definitely say that, at least for people above the age of 15 or so, soft bokeh is pleasing.

The other thing about Cinematic Mode is that, once you learn its quirks (which really took only 20 minutes or so), it changes how you shoot. The biggest adjustment is keeping folks away from the extreme edges of the frame, though a third of the way into the frame is fine.

Not only do you have reasonable faith it's going to keep things "in focus" as you move around, you also know that moving around will feel more dramatic with the focus change. So you find yourself doing a lot more drift-ins, drift-outs, and circular moves around a subject because the results are just so fun.

Cinematic Mode struggles with more extreme shots, like this 3x shot of a hand against a faraway background, showing some light ghosting.

However, there are some noticeable artifacts. The LiDAR sensor appears to have a lower resolution than the camera, so the effect sometimes struggles to cut a clean edge between the "sharp" part of the frame and the "artificially softened" part.

This is totally acceptable for an Instagram post or a video you share with family, but it will be a major frustration for anyone trying to use this to create narrative work. There is some hope for a fix, though. At least with still images, it's possible to extract the depth map from the scene for further refinement in post.
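For the curious, here's a minimal Swift sketch of how that extraction works for a Portrait-mode still today, using Apple's ImageIO and AVFoundation frameworks. The function name is mine, but the API calls are real:

```swift
import AVFoundation
import ImageIO

// Pull the depth (disparity) map out of a Portrait-mode HEIC still.
// The phone embeds it as an auxiliary image alongside the photo.
// (Some devices store it as kCGImageAuxiliaryDataTypeDepth instead.)
func loadDisparityMap(from url: URL) -> CVPixelBuffer? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity
          ) as? [AnyHashable: Any],
          let depthData = try? AVDepthData(fromDictionaryRepresentation: auxInfo)
    else { return nil }

    // Normalize to 32-bit float disparity so it's easy to refine in post.
    return depthData
        .converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
        .depthDataMap
}
```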

A typical image where you might want to select the presenter's skin tones for manipulation.

Hopefully, at some point in the future, we'll be able to do this with motion as well.

You might be thinking, "But if the depth map is low resolution and buzzy, why would it be useful?"

The key is to remember that you can use multiple tools together in post to make selections.

For instance, let's say I want to select this performer from the background. Pulling an HSL key, you can immediately see it also grabs a lot of background where the wardrobe is close to the performer's skin tone. I can then draw a shape to further isolate just the performer.

An HSL key of that image showing it also grabs background performers and wardrobe.
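If it helps to see the moving parts, here's a toy Swift sketch of those two selections, an HSL-style qualifier plus a drawn shape. Every name and range value below is invented for illustration, and real grading tools give you soft rolloff instead of these hard 0-or-1 keys:

```swift
// A pixel with precomputed hue/saturation plus its frame position.
struct Pixel { var hue: Float; var sat: Float; var x: Int; var y: Int }

// 1. HSL-style qualifier: keep everything that *looks* like a skin tone,
//    which also catches similar wardrobe and background performers.
func hslMatte(_ pixels: [Pixel]) -> [Float] {
    pixels.map {
        (0.02...0.10).contains($0.hue) && (0.2...0.8).contains($0.sat) ? 1 : 0
    }
}

// 2. A drawn shape (here just a rectangle) around the performer.
func shapeMatte(_ pixels: [Pixel],
                x: ClosedRange<Int>, y: ClosedRange<Int>) -> [Float] {
    pixels.map { x.contains($0.x) && y.contains($0.y) ? 1 : 0 }
}

// Multiplying the two mattes per pixel keeps only the overlap:
// skin tones that also fall inside the drawn shape.
```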

Adding depth map info as another selection tool will be a huge benefit in post.

First off, if you want to create the "cinematic" effect yourself, you could combine the depth map with, say, a luminance key to help separate a face from the background (since there is often a major brightness difference between your subject and the background).
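As a sketch of that idea (the threshold numbers are made up, and a real tool would use soft mattes rather than hard cuts):

```swift
// Each matte is one 0...1 alpha value per pixel; multiplying two
// mattes is the standard way to AND soft selections together.
func combine(_ a: [Float], _ b: [Float]) -> [Float] {
    zip(a, b).map { $0.0 * $0.1 }
}

let luma: [Float]        = [0.9, 0.8, 0.2, 0.7]  // per-pixel brightness
let depthMeters: [Float] = [1.2, 1.4, 1.3, 6.0]  // per-pixel depth

let lumaMatte  = luma.map { $0 > 0.5 ? Float(1) : 0 }         // bright enough
let depthMatte = depthMeters.map { $0 < 2.0 ? Float(1) : 0 }  // close enough
let subject = combine(lumaMatte, depthMatte)   // [1, 1, 0, 0]: just the face
```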

Even better: even if you don't want the cinematic effect, having access to the depth map could make traditional color grading tools faster by offering you another method for selecting objects to manipulate.

A typical key combining both a shape and the HSL key to focus just on the performer. How nice would it be to also use the depth map for selection? Or, going the other way, to use HSL keys to clean up your depth map for better Cinematic Mode video?

Opening this tool up for post would be massive, and conveniently, Apple develops its own post toolset (Final Cut Pro), so it seems likely we'll at least get some support in FCPX in the future.

Looking at what's already possible also inspires more elaborate flights of fancy. How wonderful would it be to see a camera company (we're looking at you, Blackmagic, which works closely with Apple and also needs a major leap in autofocus) stick four LiDAR sensors around the lens mount of the next Pocket, both to record a depth map of the scene for use in post and maybe to drive on-set autofocus?

Or, if we want to get really wild, maybe we'll see Apple release an "Apple Cinema Camera": something like an iPhone with much larger sensors and all the other amazing tech the iPhone already has.

If Apple is (maybe) thinking of building a car, we can dream about what an Apple Cinema Camera might be. Credit: HypeBeast

That seems less likely, but there are rumors of an Apple Car, so anything seems possible. With current tech, it seems unlikely that Apple could use luma or HSL keys to support Cinematic Mode on an iPhone (that would be a lot of real-time processing), but if they made something that was the equivalent of a 16-inch MacBook Pro with the full power of the iPhone's cameras attached, that kind of thing could potentially happen in real time.

As a "family video" and "social media" tool, Cinematic Mode is already impressive, but even for narrative storytellers, there is a lot of promise here despite its occasionally buzzy limitations.

Have you tried it yet? Let us know your thoughts.
