
Does 5DtoRGB Yield the Absolute Best Quality DSLR Footage?

09.3.10 @ 4:43PM

Months ago here on NoFilmSchool I tried to call attention to a little-known DSLR plugin in development known as 5DtoRGB. 5DtoRGB is a software plugin from Rarevision similar to Canon E1, MPEG Streamclip, and Magic Bullet Grinder in that it is designed to transcode your DSLR footage into something that’s much more editable. 5DtoRGB claims to offer the highest quality output of all of these options, but despite my posting about the plugin repeatedly, I could do no actual tests with it since my lowly laptop was restricted to 32-bit processing (5DtoRGB requires a 64-bit processor). Now that I’ve successfully built a 64-bit hackintosh (the how-to article is coming soon!), I was looking forward to putting the plugin to work. But I was beaten to the punch by NoFilmSchool regular Robin Schmidt, who has done some great tests of his own, and as a result the word is out; now even 24 DP Rodney Charters is tweeting about 5DtoRGB. So now that we have our hands on the plugin, what’s the verdict?

For Final Cut-based editors, I’ll defer to Robin, given that I’m transitioning to Premiere Pro CS5 because of its ability to edit DSLR footage without transcoding. In two great comparisons, the first entitled 5D To RGB – New Transcoding App Tested and the second 5D To RGB The Follow Up: Bigger Comparison, Robin draws the following conclusion:

It’s an epiphany… Holy crap, this thing is unbelievable! The first thing you notice is just how insanely smooth the footage is, it looks alive, preserved and ready to grade. In other words it is in a completely different league to the footage from MPEG streamclip. First, it’s lifted the gamma and given me a flatter image and more dynamic range. Compare the grabs below. It’s also had the unexpected benefit of fixing some (not all) of the moiré issues.

He’s not overselling it; you can clearly see the differences he speaks of in his example clip:

To my eye, he’s exactly right: the 5DtoRGB clip solves the Quicktime gamma issues I’ve harped on in the past, and does indeed also seem to offer a bit of moiré correction thanks to its chroma smoothing. It looks so much better than the native file that I found myself thinking, “they’re going to sell a boatload of copies of 5DtoRGB.” But then Robin’s second test came along, and the results are much less clear cut. I’d recommend heading over to read his full post, but when converting to ProRes 422 (a lossy codec that yields smaller files than the lossless 4444), Robin reacts:

To be honest with you, I don’t really know what I’m looking at here. Everything looks pretty much the same as everything else although I’d have to say there’s a marginal preference for the 5DtoRGB result. This is not what I expected. I fully expected the 5DtoRGB to wipe the floor with everything else. So what conclusions can we draw from this?… I have no idea but what I do know is that I will certainly be putting the time in to test a clip from my future projects through 5DtoRGB at 4444 to see whether it improves them for final mastering (I suspect it probably will).

Indeed, the workflow I had in mind before I actually had a chance to use 5DtoRGB was to edit the camera originals natively in Premiere Pro, and then, once picture is locked, go in with 5DtoRGB and only transcode the files that are actually being used in the edit. Sort of an online/offline workflow. I was planning on putting this to work next week, as we’re shooting a bit of a teaser in preparation for our participation in Independent Film Week’s Project Forum. But once I saw Robin’s dramatic results I decided to quickly hop outside and shoot a brief clip of some horizontal lines to see if 5DtoRGB offered any real moiré correction. What I found was surprising: compared to Robin’s stellar results in Final Cut, I can see very little difference whatsoever in Premiere Pro. Granted, this is not a staged or lit shot — I intentionally picked some vinyl siding to film handheld because the strong horizontal lines and texture of the wall would wreak havoc with the aliasing issues inherent to my 5D Mark II. The handheld shot ensures that the ugliness will stand out, and as you’ll see, it does. But the colors, gamma, and moiré are virtually identical in the two clips, to my eye. Be sure to click the full screen button, and watch the window air conditioner on the right; you’ll see it seems equally jagged in both versions:

Looks exactly the same, right? If you have a Vimeo account you can click through to the video and download a Quicktime for a better look. And despite the fact that this is re-encoded to h.264 web video, I did export the clip to ProRes 4444 and view it at 1:1 pixels — and I still couldn’t tell the difference. Still, while this is very obviously the quickest-possible preliminary test and I don’t want to draw any conclusions, it does raise the question: is editing natively in Premiere Pro CS5 better than “up-transcoding”? Are there any advantages to transcoding, either before editing or as part of a finishing process? According to Adobe’s own Karl Soulé:

Inside Premiere Pro, the images will stay exactly as they were recorded in-camera for cuts-only edits. If there’s no color work going on, the 4:2:0 values remain untouched. If I need to do some color grading, Premiere Pro will, on-the-fly, upsample the footage to 4:4:4, and it does this very well, and in a lot of cases, in real-time. Going to a 4:4:4 intermediate codec does have some benefits – in the transcode process, upsampling every frame to 4:4:4 means that your CPU doesn’t have as much work to do, and may give you better performance on older systems, but there’s a huge time penalty in transcoding. And, it doesn’t get you any “better color” than going native. Whether you upsample prior to editing or do it on-the-fly in Premiere Pro, the color info was already lost in the camera.
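To make Karl’s 4:2:0-versus-4:4:4 point concrete, here’s a rough numpy sketch of chroma upsampling (an illustration of the idea only, not Premiere Pro’s actual resampler, which uses better interpolation than nearest-neighbor): in 4:2:0, the two chroma planes are stored at half resolution in both dimensions, and upsampling simply rescales them back to full size without creating any new color information.

```python
import numpy as np

def upsample_420_to_444(y, cb, cr):
    """Nearest-neighbor upsample of half-resolution chroma planes to full
    resolution. No new color detail is created; each chroma sample is
    simply duplicated to cover a 2x2 block of luma samples."""
    cb_full = np.repeat(np.repeat(cb, 2, axis=0), 2, axis=1)
    cr_full = np.repeat(np.repeat(cr, 2, axis=0), 2, axis=1)
    return y, cb_full, cr_full

# A 4x4 luma plane paired with 2x2 chroma planes (the 4:2:0 layout)
y = np.zeros((4, 4), dtype=np.uint8)
cb = np.array([[100, 120], [140, 160]], dtype=np.uint8)
cr = np.array([[90, 110], [130, 150]], dtype=np.uint8)

_, cb44, cr44 = upsample_420_to_444(y, cb, cr)
print(cb44.shape)  # (4, 4): chroma now matches the luma resolution
```

As Karl says, whether this happens at transcode time or on the fly in the NLE, the color information you end up with is the same, because the detail was discarded in-camera.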

At this point I’m thinking, “I’m convinced. Native editing in Premiere Pro offers the best quality, without transcoding through 5DtoRGB.” But taking a page from Rarevision’s book, where they zoom in at 800% on their web site to illustrate the difference, lo and behold: there is a difference. Here I’ve zoomed into the red bike (red is a notoriously difficult color for video compression) at 500%. This image should animate between the two; watch the center of the image:

The smoother image is 5DtoRGB, and the blockier one is Premiere Pro (both were exported from Premiere Pro as uncompressed images). Given the color and gamma seem identical, I get the impression that Premiere Pro doesn’t exhibit some of the color inconsistencies that Final Cut does. In fact, the only advantage I can see is likely due to 5DtoRGB’s chroma smoothing. Will you ever notice this difference for web video? No. Would you notice it in a theater? Probably. More on this as I do more shooting and editing.

One final note on 5DtoRGB, and this is mostly for Rarevision, since the plugin is still in beta. Here is my CPU utilization while using the plugin to transcode a clip (listed to the right of the username, i.e. “Koo”):

The plugin didn’t use multiple cores, and it seemed to max out at 75% CPU utilization. By my rough calculations, with a quad-core processor, this means it used just under 20% of the total CPU power available. I realize that it’s rare in practice for any application to use every processor core to its fullest — but as a point of comparison, exporting the clips via Premiere Pro to h.264 utilized 450% of my CPU. So whereas many folks are noting the relative slowness of 5DtoRGB compared to other transcoding solutions, I imagine there’s a lot of room for improvement.
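For reference, the arithmetic follows Activity Monitor’s convention, where each core contributes up to 100% (so a quad-core tops out at 400%, and a hyper-threaded quad-core at 800%, which is how an export can read 450%):

```python
# Activity Monitor counts each core as 100%.
cores = 4
total = cores * 100   # 400% available across four physical cores
used = 75             # 5DtoRGB's observed peak: roughly one busy core
print(used / total)   # 0.1875, i.e. just under 20% of the machine
```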

If you’re a Final Cut-based editor and you need to use a transcoding application from the get-go, 5DtoRGB is a strong contender — especially once Rarevision gets batch processing working. For Premiere-based editors, it looks like the workflow I’d imagined before even trying 5DtoRGB will indeed be the best: edit natively in Premiere Pro — saving the hard drive space and time you would’ve spent transcoding — and then, if you’re concerned with the absolute best quality output, transcode with 5DtoRGB as part of the finishing process.



15 COMMENTS

  • I think comparing Premiere Pro and 5DtoRGB natively makes a lot of sense, but the real question is when you start doing heavy color correction, which program will actually have the better image quality? I think it’s possible that since the native h.264 footage is so noisy and aliased in the color channels, that 5DtoRGB may actually have better IQ in the end because you’re correcting footage that’s already been smoothed a bit in the color channels.

    • Joe — that’s exactly what I mean by “as part of the finishing process” — I would expect/hope that CCing 4444 footage would indeed offer better results.

      • I guess what I mean is that it’s what 5DtoRGB is doing to that 4:4:4:4 footage. Most intermediate codecs at 4:4:4:4 will give similar results, but it’s a question of whether bypassing QuickTime and the chroma smoothing that 5DtoRGB does will make the difference in the end. I’m going to do my own tests but I have a feeling that the chroma smoothing will help tremendously. That’s really one of the selling points for me, that it’s trying to smooth over the color channels in the most lossless way possible.

        I know that you more or less state what I said above, but I have a feeling that is what’s going to separate 5DtoRGB from any other workflow.

        What Karl from Adobe is saying is absolutely true. But the advantage to going to a lossless color system is that once you do begin correcting or grading, you are working in a 10 bit system, which means that you ARE working with colors that never existed before – thus giving better and more subtle variations. The problem is that computer monitors are stuck in 8 bit, so you’ll need a TV or industrial monitor to see your true changes, since TVs and the like are actually capable of displaying the 10 bit color system. There’s also the conversion in color space from RGB to YUV in Final Cut, and then the fact that you’ll need that secondary TV monitor to see those changes and what they’ll look like away from computers.
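For scale, the bit-depth numbers the commenter is referring to work out like this (plain arithmetic, nothing implementation-specific):

```python
# Levels per channel at each bit depth: 8-bit quantizes each channel to
# 256 steps, 10-bit to 1024, i.e. four times finer gradation.
for bits in (8, 10):
    print(bits, "bit:", 2 ** bits, "levels per channel")
```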

        I guess I just wanted to make the point that if you’re staying on the Web, almost none of this makes that much of a difference. 4:4:4:4 doesn’t help a great deal when the 8 bit computer systems aren’t letting anyone see what you’ve really done color wise to the footage. Noise wise it will still help, but other than that unless you’re going to a television or projecting somewhere, the differences between codecs and compressions don’t make a heck of a lot of difference because unless you’re delivering almost uncompressed online, everything is being destroyed one way or another. It’s just frustrating that we have to deal with these color systems and codecs when in reality our footage is going to look much better projected or on a television, thanks to the way computers deal with color bit depths and color systems.

  • Koo,
    Thanks for staying on the quality trail here. In my own quick tests I’ve been most impressed with both the gamma and luma of Cineform 422, straight out of conversion from Cineform ReMaster; with the 5D box checked, 5D and T2i footage seems much improved. I have also found that with a little tweaking (bringing the toe out in Color), I can get 5DtoRGB-generated ProRes LT to look comparable to the Cineform codec, as far as showing detail in the shadows. I haven’t done any pixel-peeping or blowups or alias tests, but those are my general sentiments so far.

    Recap: 5D-to-RGB seems a definite improvement over Streamclip/Compressor for color gradation and gamma, but requires a little luma adjustment afterwards to have the range of a Cineform file, even when run on the “flatter” setting (1.8). Beyond that simple observation, I’m still green as to what’s best. (Another interesting note: Cineform seems to exhibit the same contrasty ProRes shadows as 5D-to-RGB and Streamclip and Compressor when it’s used to generate a ProRes 422 file. I would be curious to see how it fares when generating DPX or other flavors of ProRes, but have not yet tried the 4K edition that unlocks all of those conversion options. For creating its in-house codec, Cineform looks smashing right away. I’d like to try to mess with the Rarevision 5D-to-RGB luma settings, if a more elaborate settings panel were provided in the future, to see if it can create similarly subtle shadows straight out of conversion into a ProRes format.)

    Thanks again, Joe B.

  • Great post Ryan, and I think that’s probably as far as we can go for now. This current batch of DSLRs is going to be around for quite a long time, I suspect, even if the new ones switch to a different acquisition setting. The search continues!

  • Jerome Stern has a MUCH more in-depth look at the Quicktime gamma issues and how 5DtoRGB solves the problem — at least initially. He finds that you then need to force FCP to render everything with a filter (even a dummy one) if you want consistent output — great find Jerome:

    • Stern clears up a lot of potentially confusing gamma- and color-shifting mayhem with that, great find… I’m curious though, are such dummy filters required for exporting a master in FCP even if the footage has already gone through a render-pass via round-trip grading in Color? Is the nightmare actually so great that everything (again, after being rendered for finalized color-correction) must then be re-rendered again, just to circumvent Apple’s horrible master-mangling color/gamma worm holes? Or something?

      • Agh! It hurts just thinking about it. I honestly don’t know. That’s why I’ve been using Premiere CS5 lately…

        • Honestly, given that the FCP family of software is supposed to be a professional-grade alternative post-production suite primed for side-swiping Hollywood, how is it possible Apple allows the continuation (and proliferation, via the compounding confusion caused by the addition of the QT10 difference) of such a basic and egregious issue in the first place? AKA, how are we supposed to make a movie if we can’t trust what we’re seeing, or don’t know which of what we’re seeing to trust?!

          INT. COMMENT – SEVERAL DEEP BREATHS LATER – I had to cut out numerous curses and vicious bits of sarcasm to correct for my willingness to self-indulge and just rant on this baffling non-sense. Apologies for reiterating a question already known to have no good answer.

          • You know, the funny thing is, when I had the exact same thoughts, some people thought that I should quit my bitchin’. But yes, you’re absolutely right — this is a big problem and something that’s indefensible on a “Pro” platform.

          • I recall the post, never caught the comments, but after skimming them now… I don’t see how this problem being ‘years old’ or ‘nothing new’ or anything of that sort makes it any less bitch-about-able. We really don’t need anything ELSE to make film-making MORE of a bitch, Apple. Thanks for nothin’.

  • OK, I too am trying out the Adobe Premiere workflow BECAUSE of the native support for DSLR footage. I find this test very interesting, and I wonder if we shouldn’t break this into two questions:

    1. Is the color bit depth better in the transcoded footage vs. the Adobe native support?
    2. How good is the chroma smoothing?

    The reason I say this is that chroma smoothing is a “destructive” process that tries to smooth out the footage… it might also soften the image a bit. But the real point is that you can do this later as well! Final Cut has a filter and so does After Effects (I assume there is a filter in Premiere but have not looked yet); you could also build something to smooth out the color channels in Color’s node FX room…

    So is it worth the time and the destructive process?

    My guess is that if you used a filter or built some method quickly in After Effects and applied that to the Adobe native workflow, you would see the exact same difference in the bike… and if you have a fast enough computer (RAM, processor, CUDA?), then you will likely see no loss of real-time playback…

    All chroma smoothing is doing is applying a blur to only some of the color channels (in the correct color space, etc.). I can see no reason this needs to be “baked” into the file…
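That idea can be sketched in a few lines of numpy (a hypothetical illustration of “blur the chroma, leave the luma alone,” not Rarevision’s actual algorithm, whose filter and color-space handling aren’t public):

```python
import numpy as np

def box_blur(plane, radius=1):
    """Separable box blur; edges handled by replicating border pixels."""
    k = 2 * radius + 1
    kernel = np.ones(k, dtype=np.float32) / k
    padded = np.pad(plane.astype(np.float32), radius, mode="edge")
    # Horizontal pass, then vertical pass
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def chroma_smooth(y, cb, cr, radius=1):
    """Blur only the color-difference planes; luma detail is untouched."""
    return y, box_blur(cb, radius), box_blur(cr, radius)

# A flat chroma plane survives the blur unchanged; luma passes straight through.
y = np.zeros((4, 4), dtype=np.float32)
cb = np.full((4, 4), 128.0, dtype=np.float32)
cr = np.full((4, 4), 64.0, dtype=np.float32)
y_out, cb_out, cr_out = chroma_smooth(y, cb, cr)
```

Whether this runs at transcode time (baked into the file) or later as a filter at the grade, the operation itself is the same; the commenter’s point is that nothing forces it to happen early in the pipeline.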

    I have only really needed it when trying to pull keys so far, but you are right, it would likely need to be part of the process if the footage was going to the big screen… but it also seems that the smoothing could happen at any point in the process (and not need to be in the working files).

    Am I missing something?