February 18, 2013

RED EPIC-Shot 'City In The World' Paints a Darkly Beautiful New York City in HDR

I'm as captivated by striking portrayals of urban nightscapes as anyone, from the existing-light-only Nocturne to the aerial ghost-eye views of FIREFLY. There's just something breathtaking about seeing the biggest centers of life and activity during the desolate, slumbering hours. Filmmaker Colby Moore has added another quieting noct-urban document to the list. City In The World lays some high dynamic range RED EPIC sights on the city that never quite gets to sleep. Check out some of New York City's dark side below, plus some details from Colby about his non-HDRx workflow.

With thanks to Filmmaker Magazine for the find, here's Colby Moore's City In The World:

Colby has created his HDR effects manually, choosing to eschew RED's internal HDRx function and instead doing it himself in RAW processing. Some of the final footage does have "that HDR look" to it, but my personal favorite material here is the night street stuff you might not have guessed was HDR'd. Colby's method in such shots creates the precise effect at which HDR excels the most: representing extremely high-contrast scenes of harsh falloffs with a bit more detail in streetlights and headlights, and greater sight into the texture of dark streets and shadows. In the video's description, Colby goes into plenty of detail about his specific process and how he created the look of City In The World, including why he chose to avoid HDRx and the potential downsides of his method:

A short and creepy montage of scenes shot around the ever-photogenic island of Manhattan -- filmed entirely in high-dynamic range and comprised of some HDR Timelapse footage I shot, along with a collection of slow-motion and normal 24fps footage processed from Red Epic-X RAW video that I recently captured and then exported as -2, 0, & +2 TIFF stacks to be tone mapped in Photomatix using a batch processing workflow. Please note that none of this was shot using HDRx -- only normal exposures from the camera post-converted into HDR using the traditional faux-HDR method of pushing and pulling the RAW file to create bracketed images.

While HDRx is a powerful tool with a lot of benefits for shooting realistic-looking extended dynamic range, I chose to steer clear of it this time in an effort to avoid the motion artifacts that come with it, especially since I imagine those slight artifacts would have been particularly problematic when working with a more "surreal" method of HDR tone mapping, as opposed to the more subdued and natural proprietary algorithm Red uses. Also, in this case, the goal was to show the added "pop" you get with HDR video when tone-mapped using a Photomatix detail-compressing workflow, while trying to avoid going too far over the top and completely "cracking out" the image.

Please note that my method admittedly has several drawbacks -- namely, the grain from the pushed footage is a little excessive at times (a lot at others), and additionally, the push/pull limitations of the RAW file still won't allow me to capture the full dynamic range of an extreme lighting location like Times Square the way I can with DSLR bracketing of many more stops. Additionally, in an attempt to mask some of the excessive noise, I unfortunately took some artistic liberties with noise reduction, and the overall sharpness suffers a bit in several shots. There are also some flickering issues, some related to the high frame rates I shot at for certain scenes, and others related more to the processing of the HDR itself, since preventing the ugly halos associated with bad HDR is even more tricky with moving footage.
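Photomatix's tone mapping is proprietary, so the exact math can't be reproduced here, but the push/pull bracketing Colby describes can be sketched in a few lines. The sketch below is a toy illustration under stated assumptions (a single linear-light frame standing in for the RAW data, and a made-up Gaussian "well-exposedness" weight standing in for the tone mapper), not his actual pipeline:

```python
import numpy as np

def fake_brackets(linear, stops=(-2.0, 0.0, 2.0)):
    # Push/pull one linear-light frame by 2^EV to mimic -2/0/+2
    # bracketed exposures, then clip and gamma-encode roughly the
    # way a TIFF export would.
    return [np.clip(linear * 2.0 ** ev, 0.0, 1.0) ** (1 / 2.2) for ev in stops]

def fuse(brackets):
    # Toy exposure fusion: weight each bracket by how close its
    # pixels sit to mid-gray, normalize the weights, and blend.
    # A stand-in for Photomatix's algorithm, not a reimplementation.
    stack = np.stack(brackets)
    weights = np.exp(-((stack - 0.5) ** 2) / 0.08)
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

# Synthetic high-contrast night "scene": deep shadow ramping to a hot light.
frame = np.linspace(0.0, 1.0, 256).reshape(1, -1) ** 3
hdr = fuse(fake_brackets(frame))
```

The key property this illustrates is the one Colby exploits: because the +2 bracket is derived from the same RAW data, shadow values are lifted well above where a single normal exposure would leave them, at the cost of amplifying whatever noise was hiding down there.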

This process won't suit everyone's HDR applications, especially in similar night conditions, or if noise is a total deal-breaker for you. I do think that in this case, the dirtiness actually suits the piece very well. There are many shots here that look incredibly filmic, down to that layer of digital grime. The quality of noise has a definite 'granularity' to it, very visible in the grabs below (2:1 & 1:1 crop). Keep in mind these images have passed through a few generations of lossy encoding by this point, and the grain looks a lot better flurrying in motion than frozen:

Colby's export is available from Vimeo if you want to scope the material in full-res. If you liked the piece, be sure to follow him on Vimeo and let him know in the video comments.

What do you guys think of Colby's HDR process, and its results applied to this type of scenery? Have you used a similar HDR workflow in Photomatix? What did you think of City In The World overall?

Links:

[via Filmmaker Magazine]


56 Comments

It's almost creepy how real some of those shots look.

February 19, 2013 at 12:36AM, Edited September 4, 11:21AM

3
Tyler

Actually, I'm shocked people use the HDRx function so rarely on the Epic and continue to complain about narrow native dynamic range. Yes, HDRx doesn't work in every possible situation and it's not as good as native wide latitude, but in most cases, if you know the limitations, you can get nice highlight protection and a bit more freedom to manipulate the footage in post. Case in point: Fincher's House of Cards.

I don't expect the Instatube generation to have that patience, but you do almost the same thing with a film scan. Same curves, grading, and often power windows.

February 19, 2013 at 12:47AM, Edited September 4, 11:21AM

4
Natt

Once I played with some footage using a Photoshop plugin called Topaz Adjust and some action scripts (https://vimeo.com/30270103). Of course, this would be simply to pop the image, but there are no low-light benefits to this at all...

February 19, 2013 at 12:51AM, Edited September 4, 11:21AM

2

Which action scripts did you use?

February 22, 2013 at 4:12PM, Edited September 4, 11:21AM

0
Marcus

There are some stunning images in this. Thanks for sharing.

February 19, 2013 at 12:58AM, Edited September 4, 11:21AM

0

How did he "shoot in HDR" on the Epic if he didn't shoot HDRx? If he just pushed/pulled in post, that doesn't count as shooting HDR, right? Just manipulating to HDR effect with the RAW file? Did I miss something?

February 19, 2013 at 1:24AM, Edited September 4, 11:21AM

4
Chase

Right, it was never stated that he shot in HDR, just that he shot with EPIC and used the RAW files to create an HDR effect. Anything is considered HDR when you're combining multiple frames to create that specific effect, regardless of how you got there.

February 19, 2013 at 1:36AM, Edited September 4, 11:21AM

9
Joe Marine
Camera Department

Actually it says Shot in HDR at the start of the video at 00:16.

February 19, 2013 at 2:59AM, Edited September 4, 11:21AM

4
Chris

A better graphic in his video would probably be "Presented in HDR," but it's the quickest way to tell people what they're watching if they never actually get around to reading his description.

Otherwise if you read our post and his description it's pretty clear what is going on.

February 19, 2013 at 3:07AM, Edited September 4, 11:21AM

0
Joe Marine
Camera Department

Agreed. I really don't think he needed it, though. Just show it as is; it's a beautiful piece and doesn't really need an explanation.

February 19, 2013 at 3:23AM, Edited September 4, 11:21AM

5
Chris

I understand why he put it there though, I've had my videos embedded all over the place, and there are bound to be people who are going to think it looks stupid and not have any idea why - at least that tells them what's going on. It also feels a little bit like a throwback to the "Presented in Cinemascope" title cards.

February 19, 2013 at 3:38AM, Edited September 4, 11:21AM

0
Joe Marine
Camera Department

Yeah, the "Shot in HDR" graphic is what was throwing me.

I prefer "Presented in glorious Extra-Color!" Venture Bros anyone? Anyone?

February 19, 2013 at 1:09PM, Edited September 4, 11:21AM

0
Chase

Are you able to explain the difference between the two techniques for getting HDR? Is it just the look, or an actual difference?

February 26, 2013 at 1:54PM, Edited September 4, 11:21AM

10
jay

It's semantics, but I would still consider it HDR because, in essence, HDR is taking two exposures and morphing them into one image... and he's done that. It's just doing it from two exposures off of the same negative (or digital negative), instead of using two separate negatives with different exposures.

February 26, 2013 at 8:55PM, Edited September 4, 11:21AM

0
Daniel Mimura

Cool, so it's the same thing? What he's done is technically as good as doing it 'officially'? Are you able to point me to any article or info on exactly how he's done this dual exposure and how he mixed the layers?

February 27, 2013 at 5:21AM, Edited September 4, 11:21AM

0
jay

Lovely - x
Well done and thanks for sharing

February 19, 2013 at 3:58AM, Edited September 4, 11:21AM

13
Mitch

We used exactly the same workflow (triple TIFF sequence in Photomatix) two years ago with the RED One:
https://vimeo.com/58017378
A long and painful process :)

February 19, 2013 at 4:13AM, Edited September 4, 11:21AM

6
Levi

Crazy vid, Levi! Looks great.

February 19, 2013 at 5:36AM, Edited September 4, 11:21AM

4

That video is ****ing amazing. Reminds me of some of Mark Romanek's stuff. Nice art direction, really well shot and cut. Kudos!

February 27, 2013 at 5:49AM, Edited September 4, 11:21AM

5
jay

I understand that the mark of quality HDR photography or videography may be that it's inconspicuous, but I actually prefer the shots where it's obvious. Until I saw the screen grabs in the article, I wasn't really aware of the video noise in these shots.

February 19, 2013 at 9:24AM, Edited September 4, 11:21AM

0
DIYFilmSchool.net

Cool little piece. This type of workflow would work better with a less noisy, higher dynamic range camera like an F3. Even in raw, the Red just can't get enough info from the shadows and highlights simultaneously.

February 19, 2013 at 10:20AM, Edited September 4, 11:21AM

0

Excellent point, although obviously no raw tiffs to mangle. I've seen better HDR shot with a 550D.

February 19, 2013 at 1:36PM, Edited September 4, 11:21AM

0
marklondon

So very ugly.

February 19, 2013 at 11:54AM, Edited September 4, 11:21AM

7
Matt

I'm with you. I can't stand HDR. Gee, let's make the world look like a poorly lit video game cut scene.

February 19, 2013 at 11:57AM, Edited September 4, 11:21AM

0
marklondon

So very narrow-minded.

February 22, 2013 at 4:16PM, Edited September 4, 11:21AM

0
Marcus

The word that comes to mind is "supple." It's almost like the images are printed on silk... plus some interesting depth effects.

February 19, 2013 at 12:53PM, Edited September 4, 11:21AM

5
Nathan

Use of technology or technique without vision, purpose, intent, or talent, is not art.

I didn't care for this piece at all. It's pretty lackluster compared to what could be accomplished with proper use of a similar technique, and it appears to employ a technique for the sake of the technique; furthermore, the execution feels wanting, since the grade doesn't even look that good for HDR. I've seen much better HDR material of NYC and from the Epic; for example: http://en.wikipedia.org/wiki/File:New_York_City_at_night_HDR_edit1.jpg I'd be curious what lenses were used as well; they didn't look phenomenal.

I'm half tempted to go out and shoot something better this weekend if that's all it takes to get on here and in Filmmaker Magazine.

February 19, 2013 at 1:21PM, Edited September 4, 11:21AM

10
MD

Actually, I should say, I did really like his title design.

February 19, 2013 at 1:22PM, Edited September 4, 11:21AM

9
MD

The photo can't compare to the VIDEO from the original post. Other than the noise, this looks really good!

February 22, 2013 at 4:19PM, Edited September 4, 11:21AM

0
Marcus

Is it really necessary to shoot Times Square in HDR? Seems kind of redundant.

February 19, 2013 at 2:03PM, Edited September 4, 11:21AM

0
von

I saw the comments and wanted to respond with my own thoughts, since I shot the video :)

First, I agree the "Shot In HDR" logo was a misleading choice. I did want something to distinguish it at the front, but I guess "Presented in HDR" might have been a better choice.

Also, to be clear, I totally understand the aesthetic issues people have with HDR. To be honest, I have a lot of the same issues myself, and I'm not purely an HDR shooter by any means, although, admittedly, I've been going through an HDR phase as of late. Still, it's not something I would shoot a narrative project on until the technology improves -- rather, it's just something I find interesting as a hobby, and I'm fine with the fact that the look is not for some people. I'm kind of open to either side of the "it looks so fake" argument -- I still shoot and process my own medium format film a lot of the time, so it's not like I'm only running around shooting HDR 24/7. But it is something I find fun, partly because it puts some of the guesswork and randomness back into shooting images, since you can't see the final product right away in the field. The debate over whether HDR as a process is lame and unnecessary has been around for years, and I wasn't expecting this video to be immune to it. And yeah -- I have to agree, given some of the excessive NR I did on a couple of shots, specifically the one analyzed above, the video game cutscene description isn't entirely unwarranted given the lack of overall sharpness. That said, I think given the noise inherent in my method (some of which was due to shooting errors I made in the field with ISO and compression choices), I did the best I could with what I was working with.

Also, MD, I agree with you, use of technology without vision, purpose or intent is not art. Although I'd argue that the shots themselves as edited with the music had a vision and a purpose. Maybe not enough to call it art, but at least I didn't shoot a video of me walking around in Times Square in vertical format with my iPhone, you know? Either way, I was pretty up front in describing it as a series of tests, so I apologize if it wasn't your cup of tea, but I hope you realize that I didn't describe myself as an artist :) That said, I'm not here to blindly defend the video -- it is what it is, and you are more than entitled to your opinion since it got posted in an online forum such as this. Still, I'm glad you at least liked the title sequence :) That was shot on a C300 EF with a Compact Prime 100 Close Focus.

The lenses used for the actual video were of varying quality -- a lot of the time-lapse stuff was definitely on my cheaper Sigma glass. Although a good portion of the video was definitely on L-series Canon 24-70 and 70-200 II's. I work at a motion picture rental house, so I definitely could've used some high-end glass for the whole thing, but for the sake of running around grabbing these shots with no rental insurance I went with L-series for most of it.

In any event, I did want to also say that the grade of the HDR suffers a bit from the sheer time required to make changes if you don't like the initial output. My point there is, when processing still image HDRs you can simply keep re-processing every time you want to make a change and get it perfect in two minutes flat. With motion, however, I tried grading images exactly as I would under normal circumstances with still HDR images, and found that some of my usual settings created new problems that weren't evident from stills. For example, it's a lot harder to prevent halos, and sometimes you don't notice with motion that one will appear until ten seconds into a shot, when someone pops into frame. So you have to kind of watch the entire clip to evaluate how far you can go with the HDR, and then you have to keep outputting it two or three times until you find the right balance of enough HDR effect without creating halo issues or pulling it back too far. Not an excuse, but just something to consider for anyone interested in the workflow.

I've shot stills similar to the Wikipedia link MD sent (which, btw, I do think is a very nice shot) -- if you search through my Flickr you can probably see some similar to those (http://www.flickr.com/photos/colbymoore/). The problem with comparing that still image to those in my video is that that image benefits from the shooter actually being able to use true HDR bracketing in camera, hence getting proper exposures for every frame of the bracketing sequence and not relying on push/pull to make the sequence. Thus they get virtually no noise. I had a few shots of the night skyline similar to that which I did on the Epic, but the only way to make the sky itself usable was to literally crush the blacks, because they were riddled with noise. Otherwise, if your argument is purely compositional, and you're just saying that's a nifty shot of the NYC skyline, then I have to agree, it's pretty cool :)

Anyway, thanks for all the comments everyone.

February 19, 2013 at 2:43PM, Edited September 4, 11:21AM

0
Colby Moore

The video is awesome and you don't need to defend yourself. Just because some asshole called MD thinks your piece lacks "vision, purpose, intent, or talent" doesn't make it so. I think for a series of tests this video showcases all of those. One thing is to say the video isn't your cup of tea; the other is to completely insult someone's work for apparently no reason. But hey, he says he can shoot something better; if he has so much "vision, purpose, intent, and talent," let's see it.

The intro and titles were awesome. The grain that I see is actually pretty filmic. Awesome job dude.

February 19, 2013 at 4:56PM, Edited September 4, 11:21AM

13
carlos

Hey Carlos,

I think you're out of line calling me an asshole; I would appreciate an apology. I think I was pretty clear that I was expressing my opinion and why. The issue is not so much with someone else's work, but with someone else's work being praised and described as breathtaking when it isn't. In fact, the discussion of the technique seems to misunderstand what is going on and how cameras and R3D files work; in essence, there is no reason to do it this way, and it is basically no different than doing a bizarre grade in Resolve or Baselight.

I'm going to write a more substantial response to Colby since he's taken the time to respond, but next time consider someone's answer rather than attacking one part and defending another with blind sycophantism.

February 20, 2013 at 11:04AM, Edited September 4, 11:21AM

0
MD

What's it to you if someone finds it breathtaking? If it's not up to your standards, that person's reaction to it is somehow invalid? Below you say you don't understand why people put these lenses on an Epic. What isn't there to understand? People make choices, and you have no idea what went into those choices. There's no problem stating your opinion: "I think this would have worked better; what if you tried this instead?" That's constructive criticism. But you make it seem as though there is an inherent problem with his technique and method, which is to say you think there is a problem with the way he is experimenting! That, to me, is ludicrous. You would have done it differently: offer suggestions, make your point. But there's no need to attack the person's work or method simply because you do things differently, especially considering you don't know his circumstances.

Regardless my apologies for calling you an asshole :)

February 20, 2013 at 3:06PM, Edited September 4, 11:21AM

0
carlos

Thanks for stopping by Colby. Love the material and I appreciate the detail you've gone into regarding your process. Happy to share :)

February 20, 2013 at 12:01AM, Edited September 4, 11:21AM

0
Dave Kendricken
Writer
Freelancer

Thanks for sharing the link, Dave. And I'm happy to go over the details and answer questions about the project. It was never meant to be a perfectly polished final project, so I think this sort of discussion has been very helpful for me to learn more myself and it definitely helped me elaborate on some of the details I forgot to include in the video's description. Take care...

February 20, 2013 at 3:47PM, Edited September 4, 11:21AM

3
Colby Moore

Colby, I responded to you at length below. I must not have clicked reply to your original comment before submitting.

February 20, 2013 at 12:27PM, Edited September 4, 11:21AM

1
MD

I like that noise/grain.
From my little experience, in this sample I see the grain in the mid-tone areas (clean blacks and highlights), as opposed to the very dirty blacks that I'm used to seeing from DSLRs.
Is my observation mistaken?

February 19, 2013 at 5:12PM, Edited September 4, 11:21AM

0
Martin Calvi

Thanks guys -- and yeah, Martin, you are correct, although one thing to note is that I did add some grain overlays on some of the shots, so that may be part of the slightly more pleasing grain pattern in the final edit. The actual grain I was referring to in the HDR processing was much uglier, and definitely present in the blacks, but I tried to avoid shots with an excess of it as much as possible, and/or crush them somewhat in shots where it was present. And then I added more filmic grain overlays to try to mask the remnants of the uglier grain beneath. So I guess, in fairness, with the still image analyzed above, a more truthful example would have been that image before the final pass with the film-scanned grain overlay.

But for the most part, the only shots where the push/pull grain was really bouncy and in-your-face were the night shots on the street away from bright areas like Times Square, and I didn't use too many of those anyway. For the most part, if you shoot in what would be considered normal low-light conditions (where there is some street light present), the grain isn't too much of an issue, even with the HDR cranked up higher than I used it. And you always have the option to crush the blacks a little, or as much as needed, through Photomatix's built-in black clipping function. I definitely found that helpful in small doses, although obviously if you overdo it you lose a lot of detail in the blacks. But I guess my point, after all that rambling, is that even real HDR can't create light where there is none. And since I was only working with three exposures, I still wasn't expanding the captured dynamic range to the point that the camera can see in the dark :)
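Photomatix's black-clipping control is proprietary, but the crush-the-blacks idea Colby mentions reduces to a black-point remap. Here's a minimal sketch assuming a simple linear rescale (the threshold value is arbitrary, not anything Photomatix documents):

```python
import numpy as np

def crush_blacks(img, black_point=0.05):
    # Everything at or below black_point goes to pure black, and the
    # remaining range is stretched back to [0, 1]. Noise hiding in
    # near-black values disappears, along with any real shadow detail
    # living there -- which is exactly the trade-off described above.
    return np.clip((img - black_point) / (1.0 - black_point), 0.0, 1.0)

noisy_shadows = np.array([0.0, 0.02, 0.04, 0.5, 1.0])
crushed = crush_blacks(noisy_shadows, black_point=0.05)
```

Values at 0.02 and 0.04 (think faint shadow noise) land on 0.0 after the remap, while the midtone at 0.5 is pulled down only slightly; used "in small doses," as Colby puts it, that's what hides push/pull grain in the blacks.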

February 19, 2013 at 5:56PM, Edited September 4, 11:21AM

7
Colby Moore

Simply amazing.

February 20, 2013 at 10:20AM, Edited September 4, 11:21AM

0
Danny Sawyer

Hi Colby,

Thanks for taking the time to respond. I may sound overly critical, but I'll explain why, while remaining open to the idea that I'm missing something.

The issue as I see it is that there is absolutely no reason to perform this process with RED footage, and part of the problem may be with the way members of the community or authors of different publications are reacting to your film, rather than any problem with what you did or how you did it. So perhaps it's not you I'm really criticizing, but others.

Sure, it's not walking around with an iPhone, and I'm willing to accept it for what it is; I guess what I'm not willing to accept is what the article above described it as (breathtaking), or what others here are describing it as, and I'm seeking to inject a dose of reality into the blind praise that has been offered. No one here really described it as a series of tests, so my bad for not looking at it that way; I thought it was intended to make a statement about NYC, and when I saw the title design I thought it had the potential to, and then it was just a bunch of clips set to music, so I was let down.

From a technical standpoint, there are a lot of issues, and this is going to meander for a bit. When grading raw R3Ds in Baselight or DaVinci, or even After Effects, you have access to the metadata parameters, the same as using a LUT but often easier, and can create multipass color corrects similar to the old-school days of doing a sky pass or hair pass off film. Using nodes (or even layers and effects in After Effects), you can create +1, +2, -2, whatever alternate passes you want, and then use boolean operators or more complex methods to combine them and get a similar tone-mapped effect, but with the advantage of being able to manipulate any component at any point, including tweaking your base under and over grades. This removes the control problems you had and eliminates the time/effort/need to render additional passes.
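Resolve and After Effects specifics aside, the pass-combining idea MD describes can be sketched generically: build under- and over-exposed grades from the same RAW-like frame and blend them with a luminance mask. This is an illustrative stand-in under my own assumptions (the `graded_pass` helper and the choice of mask are made up here), not any particular grading tool's node math:

```python
import numpy as np

def graded_pass(linear, ev):
    # Hypothetical stand-in for a node's exposure offset: push or
    # pull the linear frame by `ev` stops, then gamma-encode.
    return np.clip(linear * 2.0 ** ev, 0.0, 1.0) ** (1 / 2.2)

def combine_passes(linear, ev_under=-2.0, ev_over=2.0):
    # Blend the passes with a luminance mask: bright areas take the
    # pulled pass (highlight protection), dark areas take the pushed
    # pass (shadow lift). A mask-based combine, not tone mapping.
    base = graded_pass(linear, 0.0)
    under = graded_pass(linear, ev_under)  # protects highlights
    over = graded_pass(linear, ev_over)    # lifts shadows
    mask = base                            # bright pixels -> weight `under`
    return mask * under + (1.0 - mask) * over

frame = np.linspace(0.0, 1.0, 256).reshape(1, -1) ** 3
combined = combine_passes(frame)
```

The design choice worth noting is the one MD argues for: because every pass stays live (here, just function calls), you can retune the under/over grades or the mask at any point, instead of re-rendering TIFF stacks each time.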

As I see it, you created more work for yourself, and then got an inferior product; yet somehow people are touting this as breathtaking (yes, their words not yours) and hence my criticism.

There are other elements of the grade that are just sloppy (and not as an inevitable result of the HDR tone-mapping technique; it's that either there are issues with the footage itself, or the secondaries are sloppy, too blurred, or not keyframed correctly).

The whole point of correcting off of R3Ds is that you can create "HDR"-looking images out of traditionally shot footage by pushing the range of the image, windowing certain parts of it, and then combining multiple passes. Locked-off shots are of course much easier, but it's doable with anything; it just might involve more rotoscoping or keyframing to get a good result. Almost everything we look at on screens today has a component of HDR and shows an image with far more stops than the camera can acquire, so I'm not criticizing the "look"; I like the look of HDR. I'm being critical because this doesn't look like good HDR to me.

Your image looks soft and glowy, and not in the traditional good way; it's the way I used to try to make bad DV footage look better in high school by blurring it and using a soft-light transfer mode. See Ink.

Furthermore, I just don't understand the point of putting lenses like that on an EPIC and going around shooting this stuff. Canon L-series glass is not very good when shooting anything in motion; the 24-70 is soft, and both the 70-200 and 24-70 are incredibly slow for what you were trying to do at points. Night shots of New York at 2.8? Why? I would not even go out with anything less than Zeiss ZE/ZF primes at around 1.4, and that's on a DSLR, where you have better low-light performance. I'd still hate it, because you have all the problems that come from non-cine glass, in a situation where barrel distortion and softness are going to be noticeable and exacerbated in every single shot because there are buildings. You're shooting at a max of 2.8 with soft glass, which has all sorts of distortion and aberration, and frankly, it's making the camera look bad. Get some of the new Canon T.9 glass, or Super Speeds if you can't get Master Primes. 5Ds on Zeiss ZE at 1.4 look better than this. In fact, I have some footage from Times Square, NYC, which I will pull and post when I have a moment. Skyline timelapses wide open on Super Speeds on any Red look like HDR with just some curves thrown on them.

$100k in rental insurance costs $1000/year, and it doesn't make sense to work at a rental house and be able to take the body out without insurance and not the lenses; unless you're at a camera house with no lenses. The softness is exacerbated by shooting scenes which have incredible detail, but you can't see any of it because the lenses are soft, and the grade is haloing all over the place.

You also say things like you made shooting errors in the field due to ISO choice and compression. Compression, sure, but ISO choice is metadata. Unless you're telling me the ISO you used caused you to blow out highlights or under/overexpose with your lens? But those would be simple exposure mistakes, which leads me to question the craft.

Finally, this was all made worse by what I perceived to be a poor music choice, bland editing, and the use of about 20 recognizable stock overlay elements in the intro and outro title FX.

The titles I really liked were the "City In The World" ones, which showed you thought about what you were doing. They showed the eclecticism of New York, and had purpose, intent, and vision supporting the craft of properly shooting and placing them. If you're capable of that, why not do the rest of it correctly?

This is, all in all, why I said I thought it was technique for the sake of technique. I understand why you couldn't use HDRx because of motion artifacting, and that traditional DSLR bracketing doesn't work. I'm sorry if this is harsh; I just don't get what the point of this article is, and I'm concerned people will go out and try to do something like this when there are better/faster/easier ways to get a similar, if not far superior, result.

Here's what would be really cool to me: use a beam splitter with two identically equipped Epics with something sharp on them like a Master Prime, and bracket them +1-2 over and -1-2 under; the cameras have enough latitude that you should have the middle just fine. Then combine those passes and you would have an absolutely stunning image. I would totally get down on a project like that, and I bet it would look phenomenal.

February 20, 2013 at 12:26PM, Edited September 4, 11:21AM

6
MD

Hello again MD,

First, let me say I agree on a lot of your points. I'm not responding here for the sake of complaining that you don't like the video, just trying to give you and others more insight into the project. The video definitely isn't without its share of faults. It's the sort of thing I'd love to do again sometime to really provide more of a best-case scenario.

I fully understand how to use RAW files to pull more info out of a shot. For example, I know I could've used power windows to bring back some of the detail in parts of my scene, such as the billboards in TS. Still, even with the most thorough understanding of rotoscoping and keyframing, with a wide shot of Times Square, trying to use power windows on every billboard and overexposed lamp (especially on shots with movement) isn't practical for the average viewer reading this site, and it's also potentially even more time-consuming than my already time-consuming faux-HDR method. Thus, I still think using an HDR method to try to get the most dynamic range out of a scene where you absolutely need it has its merits. Maybe not the exact way I edited or processed it -- but the workflow itself stands as a potential solution to a problem that others could explore. We can debate my methods, how complicated they were, or whether they are aesthetically pleasing -- but I don't think you can say "this could have been done exactly like that twice as fast whilst windowing the RAW files in DaVinci or AE." Tone-mapped HDR is a way of extending the range of the entire image as a whole, whereas windowing is only a way to spot correct. Not the same thing. If there is a way to have Resolve or AE run an HDR or extended dynamic range algorithm that basically tone-maps the entire RAW image pixel by pixel, then fine. But at that point you are just processing it as HDR in real time anyway -- something I would of course have preferred to do if I knew of a software solution. It's not like exporting three individual TIFF sequences for each shot was a fun process :) I'm no expert with Resolve -- so if there is an easy way to do exactly what I did in Resolve or After Effects, by all means, send me a link to a tutorial and I'll happily research it for the next time I undertake a project like this.

But none of the other methods I know of -- be it HDRx or Magic Lantern HDR -- would have prevented the aforementioned motion artifacting I was trying to avoid. And the only other alternative -- using a full-on 3D rig -- is something I had neither the time, the resources, nor, frankly, the desire to do. I agree it would be a more accurate test, since it would be closer to real HDR (even though there would only be two exposures), but it would require me to have another body present to lug a 3D rig around the streets of Manhattan. Hence my choice of workflow. For a normal film, I agree your method of color correcting and editing the RAW file bit by bit, portion by portion, is ideal, as it preserves the natural look of the image. But for the sake of my tests, and for the sake of those people who do happen to like the more processed look of Photomatix HDR (I'm on the fence myself), I was very clear that the intent was to test a single RAW file, bracketed in post, processed as tone-mapped HDR. If you don't like the way it looks, fine. We can agree to disagree. Or in some respects we can even agree, since I am far from calling it a perfect output. I literally described the downsides of this method below the video when I posted it, and I'm fully aware this isn't something you would, say, shoot a commercial product for a client on in its current form. It's noisy, it's slightly soft at times, and really soft at others, where I over-applied the noise reduction to compensate for excessive grain.

In terms of the lens choice -- I'm well aware that a $2k Canon DSLR zoom lens isn't the end-all, be-all of optical quality and sharpness. And some of the shots that I think you are seeing the barrel distortion on were actually on my own older Sigma 10-20mm, which is even cheaper and notoriously bad for barrel distortion. Are Master Primes or Summilux-C primes sharper and more suited to moving images? Of course; they also cost $180k for a set. And yes, there are other, cheaper lens options like ZE lenses I could've picked (although my company doesn't actually own those), but honestly I was looking for a simple set-up for shooting some tests and there were L-series zooms available near my desk :) I understand that since this video was posted in a technical forum and received a lot of views, I can't really say "it was never meant to be judged this critically" -- but at the same time, let's be honest -- it was a NYC street scene montage intended for output to Vimeo. I don't want to be one of those people who arbitrarily limits the quality of their final output most of the time, but in this instance I was okay with shooting L-series glass. I wasn't shooting this video for anything close to a cinema projection -- especially given that it was very obviously a field test for a method that already has many stated issues at this point in time. And yeah, sure, I can easily get insurance and run around shooting with any lens under the sun if I so choose, but this wasn't a paid gig for a client, and it was just me lugging around the gear myself in a shoulder bag -- so I went with the lightest and most available option.

Furthermore, I think it's kind of hard to say that you or I can even judge the sharpness of the glass properly at this point anyway, after the multiple layers of grain and noise reduction I already said I applied. I even stated that I went too far on NR on specific shots (the shot above being the best example), but I understand the argument can be made that I should have gone back and fixed it. Again, these were tests, and I'm fine with the criticism given, but what I uploaded is what I uploaded, and unless I take the time to go back and fix it, I kind of have to live with it, no? I think that time is better spent moving on to the next project, but I will be sure to consider your criticisms on the next project I do.

Anyways, with regards to the L-series lenses -- hundreds upon hundreds of awesome videos (most of them far better than mine) have been shot on those same lenses, and many filmmakers have easier access to that glass than to cinema lenses, so to just write it off as soft to the point of being unusable would make no sense to me. Not saying that's necessarily what you were doing, but I'm just pointing out that while L-series lenses may not be Master Primes, and of course they aren't really "available light" lenses in terms of stop, it's hard to argue that they haven't been used to make some pretty decent-looking videos. I'm fine with the criticism of the glass; just be aware that the optical quality of the lenses wasn't really the point of this test, and honestly, even if it wasn't a test and it was a final product, it's not like I was filming lens charts anyway. Not downplaying the importance of the optics -- obviously as a shooter you have to consider it and always try to achieve the best quality within your budget and timeframe. But in this case, I chose what I chose, and I don't think using it affected the ability of anyone to judge the HDR processing itself, which was the whole point of the test.

Anyways, thanks for the discussion -- I personally like the music, and I think the editing, while pretty straightforward and evenly paced, wasn't "bland". And the stock effects -- well, OK, you caught me :) But I didn't expect that everybody out there would love the look or style -- and that's just fine by me. That said, I'm not too worried that the article is going to create a new wave of HDR fanatics copying my method. And if it does, I think they'll probably be smart enough to adjust the workflow to something smarter, faster, and more efficient as they go along. Isn't that the point of this whole process? I'd hope nobody would read this article and then go shoot something exactly the same way I did, when even the video's description covers the inherent flaws that need to be overcome before it could be used on commercial productions.

February 20, 2013 at 3:44PM, Edited September 4, 11:21AM

Colby Moore

I think we're having a disconnect on how you would use color grading programs. A good beginner example is this Technicolor 3 Strip tutorial in Resolve: http://www.mynahmedia.com/2013/01/3-strip-technicolor-look-in-davinci-re...

If you use similar principles, yet change the effects/filters, etc., you apply to different nodes, you can get the same type of tone-mapped faux-HDR image without any pre-rendering. This is just fact. There are better, faster, more controlled ways to globally process the image, and this is why I started commenting on this article in the first place. When you say you think I can't say it could have been done exactly like that twice as fast, you're wrong. It is faster; I could do it in a few minutes, if that. The power windows, secondaries, and mattes are bonuses which would've allowed for even more control, and selective minimization and maximization of HDR-type effects. It is not helpful for NoFilmSchool to be telling people who likely don't have the knowledge base to know any better how to use a technique which will create more work for them and which, I'm sorry to say, is basically a waste of time -- unless you are truly claiming there is something the program you used can do which others cannot, which I see no evidence of. After Effects, Resolve, and Baselight all have ways of doing this without pre-renders. Again, as I've said before, there is no real "point" to such a test; it's basically testing a convoluted color grade. If you like the look, great; if not, no big deal. But to think of it as "HDR" doesn't really make any sense.

Also, windowing small parts of Times Square, etc., would not be necessary. It's more about taking entire buildings and separating them from the sky, etc., so that when you've pushed your highlights too far to get the building up, you still have a gorgeous rolling sky. This is a bit different than the type of nuanced grade you're talking about, and it makes me believe you haven't sat in a high-end color grading session to see how a great colorist pushes parts of the image.

Interestingly enough, now that I think about it, HDRx could be used to give you a better image if you processed the HDRx frame as though it was taken later in time, and only used the parts of the image that were photographically consistent with your base frame. A node shifting the exposure to match the base, followed by some form of difference/dx node to create a matte, and then using that matte against the shifted layer on top of your base, might get you a cleaner image than HDRx would on its own -- though yes, it would not be as perfect as a 3D rig.
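The shift-then-difference-matte idea above could be sketched roughly like this in numpy (toy code with hypothetical names; a real grade would build this out of nodes rather than a script, and `thresh` would be tuned per shot):

```python
import numpy as np

def hdrx_consistency_blend(base, hdrx, ev_offset, thresh=0.05):
    """Exposure-match the short HDRx frame to the base frame, matte out
    anything that moved between the two captures, and keep the HDRx
    pixels only where the two frames photographically agree.
    All arrays are linear floats."""
    shifted = hdrx * (2.0 ** ev_offset)        # bring HDRx up to the base exposure
    moved = np.abs(shifted - base) > thresh    # True where the scene changed
    return np.where(moved, base, shifted)      # fall back to base on motion
```

In a static region the exposure-matched HDRx frame passes through; wherever something moved between the two exposures, the base frame is kept, which is exactly the motion-artifact protection being discussed.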

As to your lens selection, fine, that's your prerogative. I would personally never shoot a test on an Epic with glass like that. Why not shoot it on a DSLR? Or an F3? It's the juxtaposition which makes no sense. You have a sensor which is resolving past the ability of the lens. You trade on the name recognition of the camera, but are handicapping the final product. And yes, I can judge the sharpness of the glass properly, because you started with a 5K image and ended up where you are; that's absurd. Cinema glass would simply look NOTHING like this. I guess the only comparison I can think of off the top of my head is the SNL openings from the past few years, which were shot on varying qualities of camera and glass. Whether you want to take my input on the next one is, yes, of course up to you. I would suggest you try to achieve the effect in Resolve. It should be trivial.

The 24-70 really is soft to the point of being unusable, even more so when you're doing a grade like this. The II is less so, but both it and the 70-200 are still 2.8s. You can't shoot on the Epic at night at 2.8. To me, the test wouldn't be meaningful, because you have to control everything for the post process. You give yourself the best-quality input (or at least realistic input), so that whatever image destruction or degradation occurs, you can localize it to your post workflow. Otherwise it's not a meaningful test, because it could be 5-10 different things that went wrong on the shot, and you don't know if it will cause a problem when a client is picking up the bill, or if you can say to them, yes, I can do this. Right now it looks to me like you're saying, no, I can't do this. I'd say one could, with better glass and a better grade.

Music and editing are of course personal choices, and again, this wasn't a criticism of you so much as a criticism of those lauding those choices. You're not the one who said this is awesome; you're the one who said it is what it is, these are my choices, and I'm happy with them. I'm not going to target you, but I will criticize people who offer bad opinions.

Finally, and this will sound harsh: it would never be used on a commercial production, because it's basically the wrong way to do it and you gain no benefit from the process.

Thanks for taking the time to respond and engage. I appreciate and respect that. I hope I've helped you with some input for the next one,

February 21, 2013 at 2:59PM, Edited September 4, 11:21AM

MD

MD, I think you are discussing aesthetics on technical grounds. HDR effects like this are well documented and increasingly available out of the box on many photographic cameras, and what has been done here has been a long time coming -- it's not like it's a new idea, or some completely new effect; it's just a new tool in a filmmaker's bag.

Mr. Moore did us a great service by shooting and sharing his workflow. If you don't like it and feel like you could do better or differently, please share those insights; however, the HDR effect that Mr. Moore replicated here had never before been done this well, or with a published workflow, to my knowledge. For that I owe him my thanks. If the HDR aesthetic is not to your liking, that has barely anything to do with Mr. Moore's video, considering it was a technical demonstration. For Mr. Moore's work, I applaud him.

February 21, 2013 at 3:37PM, Edited September 4, 11:21AM


It hasn't been published because it's trivial to execute in any node- or layer-based color grading suite. You would never need to do what he did, nor is there a benefit... it's very bizarre as a technical demonstration. There are all sorts of grading techniques -- say, inverting the blue channel and overlaying it -- which are variants of tone mapping and create an HDR image without glows, distortion, etc., and not going to TIFF sequences actually allows more control. Here is something done to an interview shot on R3D in like two seconds, using an Alexa LUT and a blue channel overlay: http://imgur.com/KmLNNFY It's not HDR, but it gives people that "video game" look, which could be cool for some purposes. Hell, I think it might've even been shot on a 70-200 L, but this node structure could be dropped onto any clip from the shoot and tweaked for final, as opposed to some lengthy Photomatix process.

Also, so we're clear, I've provided examples of how to do it better, or differently, so your response suggesting I haven't implies you haven't bothered to read our exchange. It's nice that you appreciate his outline, but I'm telling you, it is a waste of time. The above examples show as much.

You're shooting on an Epic; if you're not getting close to what the camera is capable of doing, then I think a discussion of aesthetics on technical grounds is fair. My point is that this type of HDR is done in most commercials and films all the time to bring windows and skies back in, but the look is just not pushed so far as to take on the surreal look HDR photos often have.

February 21, 2013 at 4:50PM, Edited September 4, 11:21AM

MD

Well, I'm going to stop debating you about Resolve, since I'm not by any means an expert with it, and can't speak to its ability to tone-map a single RAW file as an HDR image with an effect algorithm identical to the one I processed mine with in Photomatix. But saying that those other software suites provide "HDR-style effects" is vague to me, and doesn't confirm for me or anyone else that they provide actual native HDR processing support. I'm admittedly not up to date on whether native HDR processing is available in these suites, because I use a Photomatix workflow for stills, which was all I had done HDR in until this point. Hence I went with what I knew, and given I had no schedule to meet, the slow workflow wasn't a problem for me. I'll try exploring options in Resolve myself soon and report back, but from a quick search, I see no examples of "single RAW file tone-mapping" support in Resolve. I do see that it supports high-dynamic-range imaging, but I can't get an immediate answer at the time of this posting on whether that is only for a more traditional workflow, where the exposures would be captured individually, or whether it also works with a single RAW file (one of the requirements for MY scenario, because we already established I had no resources to lug around a stereoscopic rig). If it doesn't have that single-RAW-file HDR-processing support, and you still need to spit out individual TIFF sequences of your exposures to do the processing in Resolve anyway, then at that point it comes down to a user preference between Resolve and Photomatix. Sure, you can argue that at least by doing it in Resolve you could apply final color correction at the same time, but what if the user prefers or only owns another suite?
Some people are still likely to use Photomatix anyway because of its prevalence, and because it probably still has slight processing nuances they like regardless of whether I personally took advantage of them, so I'd still argue it has a place in the realm of potential workflows, in spite of your abhorrence of it on a basic level. That said, I will make no further efforts to convince you after this post. I promise :)

As I said, I'm obviously aware that I can adjust the shadows/highlights and contrast on my RAW file to achieve a higher dynamic range than with the native image the camera spits out. I know I can also spot adjust parts of an image using the various tools you describe, and that I can also sharpen and do other things to make the image pop separately in Resolve. Again, while pushing and pulling the RAW file and playing with contrast and other "HDR like effects" is essentially a form of tone-mapping in itself, and I actually AGREE that it would no doubt look more realistic and filmic, it's still not the same as tone-mapping the entire image using local operators like most standalone HDR apps or plug-ins allow me to do. I'm open to the possibility that this effect is already native in Resolve, and if it is -- fine. That's a better workflow. But the tutorial you link to provides no evidence of such an HDR algorithm being native, and since I don't have Resolve, I can't RESOLVE this argument myself without further research. But the fact remains, local tone-mapped images have their own distinctive look, and while you may hate said look with a passion, it's not out of the realm of possibilities that someone else likes it and wants to achieve that look. Even for a commercial project. Your thoughts and opinions, while technically sound and in line with the mainstream cinematographic mindset, don't represent those of every company producing paid content in the entire world. Of course you are more than entitled to your opinion, and I'm obviously not going to change it at this point. Sure, the disdain for local-operator processed HDR as a gimmicky and cartoonish look may be the most prevalent opinion -- but it isn't the only opinion. And the fact remains -- some of the characteristics of the images can't be created identically without processing that way. 
One such example, the much-maligned haloing effect, is one of the main characteristics that I think we can all agree is less than desired. But the painterly effect and the ability to better control the micro-contrast of the image is something that others find pleasing. I'm not even sure I would count myself among them, as I actually prefer to use newer, more advanced contrast blending methods like "Exposure Fusion" in Photomatix or enfuse, which are a lot more subdued and realistic in their output. But for whatever reason, this time I chose to pursue the more traditional "Details Enhancer" style to match the HDR timelapse footage I had already filmed, and since I don't find the two methods blend that well, I wound up processing the video footage the same way to match. I'd actually like to shoot some more tests down the road processed completely with Exposure Fusion or enfuse to see if certain detractors of the more cartoonish style might find it more aesthetically pleasing. I know I do for most scenes, but I haven't tested it with video footage, so I'm not aware of what potential downsides, if any, there might be to processing motion footage that way.

And just to clarify the term local tone-mapping -- my earlier description of my form of processing as "global" was actually misleading in hindsight. I meant global in the sense that it required no power windows or nodes, etc. But even though processing in an app like Photomatix allows you to use sliders to adjust the look of the whole image at once, the processing itself is actually based on an algorithm that calculates each pixel individually in relation to its surrounding pixels and preserves local contrast in a way that global processing with most simple effects absolutely does not. Whether this is good or bad has been debated across endless forum posts on the web. But this is traditionally referred to as "local operator" tone-mapping, and while I'm pretty sure Resolve does have that sort of HDR processing support for INDIVIDUALLY exposed video files to be combined (traditional HDR like you would shoot with a mirror rig), I wasn't aware of a provision for processing a single RAW image using what we consider to be "local operator" tone-mapping at the time I edited the video, so I didn't pursue it. To be fair, even though there is such a feature in Photomatix, it doesn't have .R3D support -- hence the need to use Red Cine X to spit out TIFF sequences instead of just opening straight into Photomatix. So really, either method would still require the image-sequence exporting to occur with a single-RAW-file workflow, unless one of the suites has BOTH features. If my info is out of date and Resolve does have support for single RAW file tone-mapping (specifically as locally tone-mapped, and not as some sort of exposure fusion algorithm), then, well, duh -- that's a better workflow, and my method is irrelevant to those with access to that software. I honestly haven't tried it, so I'm not going to pretend I know one way or another.
But if it's such an easy and efficient workflow, it sure seems like a simple search would yield results on how to do it and Vimeo would surely be littered with examples.
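The global-vs-local distinction above can be shown in a few lines of numpy. This is only an illustrative sketch (a box blur standing in for a proper Gaussian surround, and a Reinhard-style curve standing in for whatever Photomatix actually computes):

```python
import numpy as np

def box_blur(x, radius=4):
    """Cheap neighborhood average standing in for a Gaussian surround."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(x, k, mode="same")

def global_map(lum):
    """Global operator: one identical curve for every pixel
    (Reinhard's simple L / (1 + L))."""
    return lum / (1.0 + lum)

def local_map(lum):
    """Local operator: each pixel is compressed against a blurred
    estimate of its own neighborhood, so small-scale contrast
    survives heavy range compression."""
    return lum / (1.0 + box_blur(lum))
```

On a flat field the two agree; on textured highlights the local version keeps visibly more ripple, which is exactly the "painterly" micro-contrast (and, pushed too far, the halos) being debated here.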

I do agree that HDRx, used in the selective method you describe above, would be useful for achieving higher DR whilst avoiding motion artifacting on moving parts of the frame. Also, I'd like to note that it was never my intent to dissuade anyone from using HDRx. I simply didn't want to encounter motion artifacting, which I thought might be even more pronounced when local contrast was enhanced the way I planned on processing my footage in Photomatix (local tone-mapping, not exposure fusion). Plus, I was probably a little too hung up on having "three" exposures, when I could have easily used HDRx processing to get a good high and low exposure, and the middle would probably have been preserved well given the already mammoth DR of the camera itself. But again, to be clear, HDRx is a more realistic-looking form of processing, similar to Exposure Fusion, and that wasn't what I was matching to my timelapse footage. So in the end, I STILL would have gone with Photomatix for this particular project. In summary, I chose what I chose, and I'm done defending it for now :)

I can tell you that while I agree that shooting with lenses that can't resolve to the capability of the sensor itself is silly as a general guideline, and I definitely didn't do it this time to achieve an "artistic vision", it is again a choice some may want to make, and you might want to consider being less black and white with your outlook on cinematography in general. I know I can't speak to the artistic merits of soft lenses, because I'm not officially an artist, but I can tell you we live in a world where tons of people are shooting on vintage cinema glass that doesn't necessarily resolve to the specs of modern sensors, and they are producing footage that is plenty watchable and narratively intriguing. You may have found my piece dull -- fine. But to say any lens of any type or mount is "unusable" for commercial or narrative work comes off as elitist. Not ideal? Sure. Noticeably soft? Sure. But not unusable. If it allows you to focus light onto the sensor, somebody out there will use it. Sure, in most of the cases where people use glass with inferior sharpness, it's because they are accepting a trade-off on that front in an effort to achieve specific flare or contrast/color styles with vintage glass, but for some of them (like me in this instance) it may just come down to shooting with the lens that was readily AVAILABLE. And that was the case for me. So if the charge is that I shot with lenses deemed inferior for the Red Epic's sensor, then scientifically I suppose I have to plead guilty. And I can't argue that the grade does anything to hide such sharpness issues (if anything, it simply adds to them). But I think it could have been worse. I could have shot HDR "lens whacking". Or HDR with a zone-plate lens. I applaud your high standards for optics, but I guess I don't see the point in defending the lens choice of a video you've already outwardly admitted you're not a fan of.
I will note that flatly saying you can't shoot on an Epic at night at 2.8 under available light is not really accurate. You may not be able to shoot on the street under only halogen lights, but I can assure you that in many areas of tourist-friendly NYC you can get usable images at 2.8 at acceptable ISO ratings. And a good portion of the areas I filmed in were well lit as such. Furthermore, I also used a Canon 85mm 1.8 for some shots you saw, so you're evaluating my choice of lenses under specific scenarios before I've even given you all the details on what I used. But I'll agree that I could have shot a sharper image with T1.4 cinema lenses. That much is clear, of course.

Finally, to clarify one other thing: I never said I couldn't achieve the look I wanted or that a client wanted. I said that, as a general rule, the current limitations of my method may not work for every shooter's or client's needs. That might include budget, speed, and available software/equipment limitations. I myself am more than confident that I am capable of using my discretion to judge what caused any of the issues that I want to change with the footage, should I shoot something like this again. I may not have finalized the definitive workflow for post, but that was the whole point of my test. And nobody ever said there couldn't be MULTIPLE tests, did they? Sure, the video got a decent amount of exposure on Vimeo, but I never stated it was a polished, final product, and I think most of the articles covering it pointed out that I made notes of issues with the workflow.

Anyways, thanks again for the conversation -- I know we are at odds about my method, and frankly my entire workflow with this project, but I appreciate a lot of your criticism and will indeed keep your thoughts in mind in the future. I may not adopt them as gospel, but I don't think you would expect me to :) I also understand your concern that one person's methodology will be lauded, held up as the definitive method, and become a fad, but I caution you to give the creative community more credit than that. I understand that with the prevalence of perceived computer-enhanced cop-outs in the digital cinema age, like HDR, or 3D, or Magic Bullet color grading, it's easy to feel animosity towards methods that seem outwardly hollow and aren't your cup of tea. But I assure you, the vast majority of us don't just copy tutorials and methods we learn online step by step to try to achieve easy fame -- most of us are reading them and trying things to adapt and learn and have fun -- just as I'd imagine you do when you shoot something for a personal project. I know personally I won't even be shooting my next video in HDR, and when I said this was a test, I really meant it was a test. My decision to email it to Gothamist was one step ahead of my technical confidence in the project :) But my next video may be a test too -- and I'm not gonna feel bad about it :) If I do shoot HDR again down the road, I definitely won't be going into the project thinking I've found the end-all solution to capturing all video in the best and prettiest manner possible.

February 21, 2013 at 8:32PM, Edited September 4, 11:21AM

Colby Moore

I will say that the sample you provided in the last link, MD, while looking like more of an exposure fusion than a Details Enhancer HDR, is aesthetically pleasing to me. It's a more subdued and natural look without as much micro-contrast or glow, and I agree that using a method such as that would seem beneficial for those trying to achieve a realistic look. While I'm not conceding that this is "exactly like" what I shot, I'll agree that it's much simpler to achieve, and for people looking to do a more natural look, it might work nicely. I'll try it out myself sometime soon and report back.

February 21, 2013 at 8:48PM, Edited September 4, 11:21AM

Colby Moore

MD, if I'm following you correctly, though, you're just suggesting a simple channel inversion and overlay, and I assume also a blur to mimic the "bleeding" of edges a little. There are many tutorials around for doing this with stills as well. While that gets you the rough aesthetic equivalent of an exposure-fused HDR in terms of contrast adjustment, I don't see how it's actually adding any further dynamic range to the image, unless you are using other exposure tools to bring back specific parts of the image. If I want to bring back skies or bring up shadow detail, I don't see how the method you show is able to pull back any info that wasn't there in "2 seconds". Sure, it's a RAW file, so we know the info is there and that you can still adjust the image on a case-by-case basis, but unless I run an actual HDR algorithm, I don't see how I'm getting the additional extended dynamic range without creating some sort of mask to separate the areas I want to adjust. But with actual bracketed exposures and an HDR process, I can squeeze more info into the whole image without further masking adjustments to skies or shadow areas, or broad adjustments to shadow/highlight toning. I'm not saying your method doesn't work -- just that I don't see how it would be considered an actual HDR image. I realize that might be fine if your scene more or less fits within the DR anyway, but with shots that have extreme DR, it seems like HDR still has a benefit in that area, no?
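For contrast with the single-file case, merging genuinely bracketed frames really does combine information no single exposure holds. Here's a toy numpy sketch of one common approach (a Debevec-style "hat" weighting on linear frames; the function name and weighting choice are illustrative, not anyone's actual pipeline):

```python
import numpy as np

def merge_brackets(frames, evs):
    """Hat-weighted merge of bracketed *linear* frames into one
    radiance estimate: clipped pixels get ~zero weight, so the short
    exposure supplies the sky the long exposure blew out."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, ev in zip(frames, evs):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-tones, distrust extremes
        num += w * img / (2.0 ** ev)        # map each frame back to scene radiance
        den += w
    return num / np.maximum(den, 1e-6)
```

Note the frame that clips at 1.0 contributes nothing; that is precisely the extra latitude real brackets buy over a single pushed/pulled RAW, where all three "frames" share the same clip point.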

February 21, 2013 at 9:44PM, Edited September 4, 11:21AM

Colby Moore

I really appreciate this discussion and your two inputs Colby and MD.

As for the link MD shared (the 3-strip Technicolor technique), I think it was simply to show us how we can create three layers from a RAW file in Resolve, adjust image settings independently on those layers, and then merge them into a final image. A technique not so dissimilar to what a post HDR workflow requires. A node-based HDR process: your three layers would be your -2, 0, and +2 exposures.

February 22, 2013 at 2:50AM, Edited September 4, 11:21AM

Brody

Right. I mean, I can see how such a process would work. I haven't done it in Resolve myself, but I've done similar processes in Photoshop with stills, and it's definitely a way of achieving a more natural-looking image. But you generally still wouldn't have as much control over things like local contrast and micro-smoothing, which is helpful for noisy images.

All that said, I'm not very experienced with using nodes, or with what additional image controls they give you, so sure, if there are ways to replicate all the functions of HDR software by combining other effects, then I'm sure you can get close to an "HDR look". But the fact remains that merging three images still isn't technically the same as tone-mapping an image, where the software calculates every pixel and its surrounding characteristics to account for local contrast changes. That method allows you to create much more vibrant HDRs, which tend to look fake and cartoony if taken too far, but can be controlled and managed using the various sliders Photomatix and other software programs provide.

Anyways, thanks for the discussion guys. I'm definitely interested in learning more Resolve, so I'm more than willing to play around with MD's method and see for myself the benefits of that workflow. I encourage everyone else to try it his way as well if you think my method is too time consuming or unnecessary.

February 22, 2013 at 8:13AM, Edited September 4, 11:21AM

Colby Moore

I'm pretty busy today so I haven't read through your full posts. I will later, and if appropriate, respond in detail.

However, I did just want to say that modifying local contrast or "micro smoothing" is pretty easy to do in any color grading suite. If I understand the algorithms correctly, local contrast is basically a variation on using an unsharp mask. They're just not called the same things Photomatix calls them, and if you want a simple way of doing this, there are specific local contrast and similar filters, plugins, and addons that use varying mathematical approaches. In general, an unsharp mask is used to detect edges, and you can use this mask with a blurred version of the original image to create exactly what local contrast does; this is what I was referring to with modifications to the Technicolor 3-strip. I've never used Photomatix, but it sounds like it's the KPT version of something that can be created pretty quickly if you know what you're doing. My guess is I could go through the Photomatix feature list and find a pretty quick way to rebuild what each tool does with a few nodes. I'm open to being wrong if there are some particularly exotic or proprietary equations being used, but it looks to me like they're basically shortcuts for old-school 2-5 step Photoshop processes.
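The unsharp-mask reading of "local contrast" above boils down to a couple of lines. A minimal numpy sketch (box blur as a stand-in for the usual Gaussian; the `amount`/`radius` names are hypothetical, mirroring typical filter controls):

```python
import numpy as np

def box_blur(x, radius):
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(x, k, mode="same")

def local_contrast(x, amount=0.6, radius=8):
    """Unsharp-mask style local contrast: the blurred copy isolates low
    frequencies, the residual is the 'detail' layer, and adding that
    residual back scaled up boosts contrast only around edges. With a
    large radius, plain sharpening turns into the broad HDR 'pop'."""
    detail = x - box_blur(x, radius)   # high-frequency residual
    return x + amount * detail
```

Flat regions pass through untouched, while edges over- and undershoot, which is the glow/halo behavior both sides of this thread keep pointing at.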

Also, FYI, Photoshop lets you use video clips, so you could do this all there if that's better for you, though you'd then want to export the -2/+2 brackets out of RedCine.

February 22, 2013 at 4:31PM, Edited September 4, 11:21AM

MD

You use an unsharp mask or a similar technique -- you could use a variety, such as find edges or a multitude of others -- to create a mask, which you then apply to a blurred version of one of your exposures. This would allow you to create the day-glow effect of local contrast, and others would let you do smoothing, or pull detail back in.

Additionally, you can use luminance values on other exposures to generate the matte for the sky -- i.e., when it's blown and clipping in your highest bracket, you create a matte from it and apply that to your lowest bracket to pull back in the sky without necessarily having to rotoscope. You can choke, blur, or window that component to get a base image with everything in it, and then apply even more techniques like local contrast on top of the whole recombined image, or just to certain parts of it.
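The luminance-keyed pullback described above might look like this in toy numpy form (hypothetical names; `clip` and `softness` mimic the choke/feather controls a matte node would give you):

```python
import numpy as np

def sky_pullback(bright, dark, clip=0.98, softness=0.1):
    """Luminance-keyed highlight recovery: build a soft matte from
    wherever the bright bracket approaches clipping, and use it to
    feather in the dark bracket -- no rotoscoping required."""
    # 0 below (clip - softness), ramping to 1 at the clip point.
    matte = np.clip((bright - (clip - softness)) / softness, 0.0, 1.0)
    return (1.0 - matte) * bright + matte * dark
```

Clipped sky pixels take the dark bracket while mid-tones pass through untouched, which is the "gorgeous rolling sky" recovery described earlier without hand-drawn masks.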

The more I think about it, the more flexibility I think you really have, especially when working with the full 5K image and delivering at 2K.

I'm not sure what process was used on this, but to me it's closer to what I was expecting when I originally saw the article: http://vimeo.com/32162661

February 22, 2013 at 5:44PM, Edited September 4, 11:21AM

0
Reply
MD

Obviously, you don't have to just blur your image; you can put detail back with sharpening or other adjustments applied through the unsharp (or other) mask.

February 22, 2013 at 5:45PM, Edited September 4, 11:21AM

0
Reply
MD

Thanks for elaborating, MD -- I'm open to the possibility that the whole look can be recreated faster in Resolve by someone more knowledgeable than myself. Whether the exact HDR algorithm used by Photomatix can be recreated 100% with a combination of effects, or only closely approximated, I would need to do my own testing to say. But I suppose it stands to reason that any processing done by Photomatix can be more or less achieved with a combination of filters and grading in Resolve.

I think the one thing we can definitely agree on is that if it can be done inside a single grading program with RAW file compatibility, such as Resolve, that's of course the ideal method. I would have benefitted greatly from that option, because then I could have gone back and done the final color correction on the RAW file instead of on what ultimately became multi-generation processed ProRes files.

But as I said, I was never trying to proclaim I had come up with the definitive and fastest method of shooting HDR video, and if the articles and praise the video received contribute falsely to such an impression, I can't really control that, other than to say here and now that anyone planning to shoot something like this should explore all methods before undertaking the project -- especially given the abnormally strong outpouring of negative sentiment that shooting HDR for any type of project generally invites :) I do plan on posting an update on Vimeo itself soon responding to questions from others, so I will mention a little more about the alternative methods and encourage people to explore other routes, including yours, MD.

I had actually seen the car test you linked to before, MD. I think it's cool, but given my experience with tone-mapping, it looks very similar to what I would expect from a method like mine. Regardless of the workflow and the time it took, a lot of the shots in my video also had minimal haloing (albeit more was visible, but I also had more sudden movement in frame), a similar overall level of micro-contrast, and an overall "locally tone-mapped look." Specifically, notice how the shadow areas of the palm trees contrasting with the sky create slight halos around them -- that's traditionally a telltale sign of that type of processing. They also seem to have crushed the shadow detail quite a bit in certain shots (what would normally be done in Photomatix as black clipping, but could easily be achieved through grading as well), probably in a similar effort to hide the bouncy noise you get when pushing the shadows through that type of processing.

But again, I'm not saying they did it in Photomatix, or even that they didn't do it using a simple overlay method in a color suite, as you suggest is possible and faster. I'm just saying that, holistically, the HDR characteristics of the image don't seem that different from mine and wouldn't lead me to believe they had a radically different style of processing. I definitely used a lot more grain and noise reduction (something I'm starting to wish I hadn't done, because a lot of the noise wasn't even that noticeable at 720p), and of course my color grading wasn't to your liking, but the actual filtration doesn't seem that different to me. But I guess that's neither here nor there at this point.

Anyways, thanks for sharing and take care.

February 22, 2013 at 6:39PM, Edited September 4, 11:21AM

0
Reply
Colby Moore

MD -- what's your Vimeo link? I just want to check out your work.

Cheers!

February 25, 2013 at 8:02PM, Edited September 4, 11:21AM

11
Reply
Jeremy

After I originally commented I appear to have clicked on the "Notify me when new comments are added" checkbox, and now every time a comment is added I receive four emails with the same comment. Is there a way you can remove me from that service?
Appreciate it!
Appreciate it!

April 11, 2013 at 1:20PM, Edited September 4, 11:21AM

2
Reply