
March 4, 2011

Does the Sony F3 Have 5X the Light Gathering Ability of the RED EPIC?

I'm working on an in-depth post comparing the Sony F3 with RED's forthcoming EPIC-S. While the F3 is shipping in limited quantities and the EPIC-S has only been announced, it's possible to compare the two because the EPIC-S shares its sensor with the just-shipping (and significantly more expensive) EPIC-M. In putting together the post, there's one section that I'm not sure of: the pixel pitch of each camera's sensor. Anyone out there want to help me with my math?

UPDATE: see some of the comments below, but it's being reported that the F3 in fact has a 3.4-3.6 megapixel sensor. And from further reading it seems engineers find that, while very small pixel pitches bring with them increased noise, once you reach a reasonable size -- 4-6 microns -- the benefits of larger pixel pitches in terms of noise are reduced. Thanks for the comments -- that's exactly the kind of info I was trying to suss out.

Everyone loves to argue about 4K versus 1080p, and the argument usually centers on whether 1080p is "good enough." I happen to think the answer -- for the next several years at least -- is yes. All things being equal, would I rather have a 4K instead of a 1.9K image? Of course. But all things are not equal, and there are hundreds of other considerations besides resolution. In fact, there are some situations where 1080p has advantages. For the same reason the megapixel war is waning, there's a downside to packing more pixels into the same-sized sensor: less light sensitivity. And while the consumer megapixel war was always foolish simply because the sensors were so small -- normally in the neighborhood of 1/3" -- the same formula applies to large-sensor cameras. The smaller the pixel pitch, the less light a sensor can gather.

Before we go any further, a disclaimer: I'm sure some RED fans are going to hate all over this post, but I have nothing vested in either company: I'm equally interested in both cameras. I have posted a lot about RED, and I have posted a lot about Sony. If these two cameras were made by Kraft Foods and Yokohama Tires, I'd be writing the same thing. The point of this post is for me to throw some numbers out there and see if the math is right.

No matter how good RED's technicians are -- and RED's Graeme Nattress is a software engineer of the highest order, whose new noise reduction algorithm is apparently something to behold -- the RED is at a physical disadvantage compared to the F3 when it comes to low-light shooting. Why? Thanks to its large sensor and low two-megapixel (MP) photosite count, the F3 is blessed with a pixel pitch that makes it optimal for sucking the most light out of a scene. Four times more than a Canon 5D, by some calculations. This got me thinking: how much better in low light should the F3 be than the EPIC, simply from a mathematical standpoint?

The F3 has a 2 MP sensor; the EPIC-S has a 14 MP sensor. From a resolution standpoint, the EPIC-S owns the F3. But the larger a pixel is, the more light it can gather. Let's run our own calculation of pixel pitch:

  • With a sensor size of 23.6 x 13.3 mm and a resolution of 1920 x 1080, the F3 has a pixel pitch of roughly 12.3 microns.
  • With a sensor size of 27.7mm x 14.6mm and a resolution of 5120 x 2700, the EPIC-S, EPIC-X, and EPIC-M have a pixel pitch of roughly 5.4 microns.

When it comes to pixel pitch, bigger is better. Think of the pixels as if they were buckets, and light as if it were rain: larger buckets (pixels) catch more rain (light). As such, the Sony has a bit more than double the light gathering capability of the EPIC, right? 12.3 microns versus 5.4 microns? Actually, I think -- and here's where my math may fail me -- that it should be closer to five times as much. Pixel pitch only measures the width of a pixel, and we're talking about pixels arranged in a two-dimensional grid -- buckets lined up in vertical rows as well as horizontal ones. Because the two sensors have essentially the same aspect ratio, the same ratio holds in the vertical dimension, so for two dimensions we'd be squaring the roughly 2.3X linear ratio (12.3 microns to 5.4 microns), which would mean the Sony's pixels have roughly 5.2X the area of the RED's pixels. I ran the math a few different ways and always ended up in the ballpark of a factor of five, but given that I'm a writer/director/shooter/blogger and not a camera engineer, please feel free to tell me I'm terribly wrong.
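Here's the same arithmetic as a quick Python sketch, so anyone can check my work. It's only a back-of-the-envelope model: it assumes pixel pitch is simply sensor width divided by horizontal photosite count, and it ignores microlens gaps, fill factor, and any non-standard color filter array.

```python
# Back-of-the-envelope pixel pitch and pixel area comparison. Assumes
# pitch = sensor width / horizontal photosite count; ignores microlens gaps,
# fill factor, and any non-standard color filter array.

def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    """Approximate pixel pitch in microns."""
    return sensor_width_mm * 1000.0 / horizontal_pixels

f3_pitch = pixel_pitch_um(23.6, 1920)     # Sony F3, assuming a 1920-wide sensor
epic_pitch = pixel_pitch_um(27.7, 5120)   # RED EPIC-S/X/M

linear_ratio = f3_pitch / epic_pitch      # one dimension
area_ratio = linear_ratio ** 2            # pixels are 2D, so square the ratio

print(f"F3 pitch:    {f3_pitch:.1f} microns")    # ~12.3
print(f"EPIC pitch:  {epic_pitch:.1f} microns")  # ~5.4
print(f"Linear ratio {linear_ratio:.2f}X, area ratio {area_ratio:.1f}X")  # ~2.3X, ~5.2X
```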

Of course, this is all just math. There's more to "light gathering ability" than the size of the pixels, but it is a primary consideration, and in my mind this is nothing to sneeze at. Still, the proof's in the pudding, right? While I'd mentioned previously that the F3 was reportedly terrific in low light, here's a video shot on the F3 solely with candles as lighting, at ISO 6400 (yes, 6400!):

Very, very impressive. I could do without some of the banding seen on her arm at 0:30 -- in fact, how the camera rolls off highlights (or doesn't) is one of my early concerns -- but the camera's low-light capability, both mathematically and practically, is damn impressive so far.

So, about that math -- I know images much better than I know numbers, so please correct me if I'm wrong...


26 Comments

I think the math on the PMW-F3 sensor might be off. I don't think it's 1920 x 1080 within 23.6 X 13.3 mm.

At a Sony event, Dr. Hugo Gaggioni (CTO, VP of Technology) said it is a 3.43-megapixel sensor, that they aren't disclosing the number of pixels across the vertical, and that it has a "very interesting" color filter array.

http://notesonvideo.blogspot.com/2011/01/sony-pmw-f3-rule-boston-camera....

March 4, 2011


Can't you just throw an anamorphic lens onto your 5D and get a perfect 4K image? Or is that just with a GH2? I'm confused... Gotta stop going to other sites...

And yes, I'm kidding.

You're a brave man, Koo, getting into this debate. You're rolling the dice on coming home and finding that a disgruntled RED user has left a rabbit boiling on your stove.

March 4, 2011

Neil

Wait, what do you have against rabbit stew?

Just kidding. I could be way off with this math, or my understanding of pixel pitch could be fundamentally flawed. That's why I'm posting this -- to suss out the wisdom of the crowd...

March 4, 2011

Ryan Koo

The banding and highlight roll-off you're complaining about in the test video are compression artifacts, no? Could they even be caused by the Vimeo compression, or does the sensor play into that too?

I'm really interested in what you find out about this; it's something I've been wondering about a lot myself, but the math definitely keeps me from going as far as you have, haha.

March 4, 2011

MRH

There is so much more to pay attention to when it comes to sensitivity. In-camera noise reduction algorithms have become at least as important as a good S/N ratio on the sensor. Look at the latest Canon XF series: they perform better in low light with their 1/3" chips than the Sony EX-1, which has a 1/2" sensor and the same pixel count.
Microlens technology and sensor design are far more relevant than a simple calculation based on the sensor's size and the pixel count.

As a matter of fact, the F3 is rated at 320 ISO while the EPIC is, as you well know, rated at 800.

It all depends on the noise in the image. Maybe the F3 at 800 ISO has less noise than an EPIC (but I doubt that).
And remember, you cannot measure digital sensitivity on the same basis as film sensitivity, but that's another story.

March 4, 2011

William

The F3 isn't rated at 320, it's rated at 800. And I say right there in the post, "There’s more to “light gathering ability” than the size of the pixels, but it is a primary consideration" ...

March 4, 2011

Ryan Koo

It is true that lower res (larger pixels) gives higher sensitivity, greater dynamic range, lower noise and other benefits, all other variables being the same.

However, there is a "physical" advantage that higher resolution sensors have over lower-res sensors -- greater color depth.

All variables being the same, if we increase the resolution of an imaging system, we simultaneously increase the color depth. This principle is basic to all photography and imaging. This color depth boost happens even if the bit depth remains unchanged.

Bit depth and color depth are actually two different properties. Basically, color depth is a product of bit depth and resolution.

Given a 1920x1080, 10-bit-per-color-channel sensor, if we increase the resolution four times (to approximately "4K"), we get a 64-times increase in color depth.
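To put rough numbers on it, here's one way the arithmetic can come out (this is only a sketch of one reading: it assumes the four extra 10-bit samples end up blended into each perceived output pixel):

```python
# A rough sketch of one way to read the "64x" figure (an assumption on my
# part, not a definitive derivation): if four 10-bit samples blend into each
# perceived output pixel, each channel gains roughly 4x the distinguishable
# levels, and that compounds across the three color channels.

samples_blended = 4                  # 4x the pixel count of the 1920x1080 base
channels = 3                         # R, G, B
color_gain = samples_blended ** channels
print(color_gain)                    # 64
```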

Just a disclaimer: I am not a high res fanatic and I would sooner use an F3 or an Alexa or a Genesis, rather than use any Red product. I am just mentioning a little-known (yet significant) photographic principle that is relevant to resolution discussions.

There are more details on the relationship between resolution and color depth here: http://marks.org/color_depth/

March 4, 2011

Color Depth

I think you should include the Alexa in this comparison also. A more expensive camera of course, but it has gained a lot of credibility from a few of the more traditional filmmakers.

Judging by the footage I've seen so far, it's the F3 that has impressed me the most. Even the cheap-glass movie you posted the other day. There's just something about that look that really appeals to me. The colours especially. Red footage, on the other hand, has never really impressed me as much. When I first saw the TV show Misfits, it looked to me like they'd shot it using a 35mm adapter. The shallow depth of field was there, although it still looked like video. But no, it was shot on the Red One.

I think the main advantage you get from shooting on Red would have to be the higher frame rates. The additional resolution would be nice too, but for me, only for cropping parts of a shot. Like using a wide shot at 5K and cropping to a 2K close-up. Or for funky zoom effects in post.

March 5, 2011


I don't agree with the initial premise: that of two sensors with the same physical size, one with more megapixels than the other, the one with fewer megapixels can gather more light

it cannot

it gathers exactly the same light: that which comes through the lens and hits that surface, minus whatever is lost in the spaces between microsensors, which, with today's gapless microlenses, is basically zero

and I've carried out an experiment to back up this theory:

I compared the following two sensors:
* the 4.5 Mpix/cm2 sensor from a canon 500D (DSLR)
* the 41 Mpix/cm2 sensor from a casio z270 (point-and-shoot)
both cameras were released around the same date, so, if anything, the more expensive DSLR should have better technology than the point-and-shoot
still, I found that, while the point-and-shoot delivers much more noise per pixel, it delivers much less noise per sensor area
that is: if I could build an APS-C sensor full of microsensors like those in that point-and-shoot (and if that could work reasonably well in terms of heat and speed...), I'd have a 130 Mpix APS-C sensor that would be able to give me 15Mpix downsampled images with A LOT less noise than what I got from that 500D

this is not a universal rule, but it tells me that fewer megapixels doesn't mean less noise

noise depends much more on other factors than on pixel density

here are my results:
http://www.similaar.com/foto/mpix/mpix.html
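and here's a toy simulation of the idea (a rough sketch only: it assumes pure photon shot noise and ignores read noise, heat, fill factor and the rest of the electronics):

```python
import numpy as np

# Toy shot-noise simulation (a rough sketch: photon shot noise only, ignoring
# read noise, heat, fill factor and electronics). The same sensor area is read
# either as 1 big photosite or as 9 small ones whose signals are then summed
# ("downsampled") back to the big photosite's resolution.

rng = np.random.default_rng(0)
photons_per_area = 10_000          # mean photons hitting one big-photosite area
trials = 100_000

big = rng.poisson(photons_per_area, trials)                                # 1 big photosite
small_summed = rng.poisson(photons_per_area / 9, (trials, 9)).sum(axis=1)  # 9 small, summed

for name, x in [("1 big photosite", big), ("9 small, summed", small_summed)]:
    print(f"{name}: mean={x.mean():.0f}, noise={x.std():.1f}, SNR={x.mean() / x.std():.1f}")

# Both come out with essentially the same SNR (~100 here): per pixel the small
# photosites are noisier, but per sensor area nothing is lost -- until read
# noise and other per-photosite penalties enter the picture.
```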

March 5, 2011


which, by the way, doesn't mean I want 16K footage (unless for some intermediate steps in VFX-rich sequences), because my eyes can't even see 4K
http://www.similaar.com/foto/mpix/mpix2.html

March 5, 2011


and I'm not the only one finding this kind of result

you may want to visit this (very long) thread:
http://forums.dpreview.com/forums/readflat.asp?forum=1018&message=286074...

March 5, 2011


From dpBestflow: "Increasing the number of megapixels results in higher resolution but if the sensor size remains the same, it results in smaller photosites (pixels). Photosite size is referred to as the pixel pitch. Smaller photosites gather less light, so they have less signal strength. Less signal strength, all other things being equal, results in a less efficient signal to noise ratio, therefore more noise."

This is the thinking that got me to write this post (and to make these calculations). However, according to Luminous Landscape: "sensors with photosites down to about 6.8 microns produce the highest quality, with little if any image quality degradation over ones with larger photo sites."

That quote is a bit dated but it does get to the heart of the matter, which is that sensors of considerable sizes such as these seem not to suffer from the "too tightly packed, too hot" issues that tiny (1/3", 1/2") chips suffer from. In fact, the EPIC's signal-to-noise ratio of 66dB beats the Sony's S/N of 63dB (both specs provided by the manufacturers; not verified).

I'll wait to update the article itself until someone can verify this, but if the Sony sensor is indeed 3.6 or 3.4 megapixels, it's likely using a non-standard bayer pattern (four colors instead of three), which is interesting in its own right. Interesting, that is, if you're a camera nerd, which anyone in this comments section obviously is.

March 5, 2011

Ryan Koo

these tests tell me that it is "noise per pixel" which is increased, not "noise per sensor area": once you downsample your images to match the lower pixel density sensor, you get A LOT less noise, at least with these specific sensors; it doesn't have to be like that in all cases, but it's proof that you can go both ways

a requirement for this is that background electrical noise has to be very low, but it seems that already in 2009 it was low enough for 130 megapixels to deliver less noise than 15 megapixels on an APS-C sensor, and it's the other requisites that are holding the megapixel count back (sensor speed, heat, data throughput, etc.)

that pixel density translates to photosites 1.6 microns apart from each other, which would mean a 4x improvement from 2006 (your link) to 2009 (my test); sounds plausible, but it's not even needed at all: I just checked that 130 mpix was better than 15 mpix, but maybe the optimal was 60 mpix, and for that you just need a 2x technological improvement in 3 years; I stand by my conclusions

on another topic: that non-standard bayer pattern looks great!!
I thought about this a while ago, and my approach would be to go for a 2x3 basic structure, with five colors (red, green, blue, white, gray) arranged in a pattern like (GWR,WBK), where K means black, i.e. a gray filter designed to protect your highlights, and W means white, i.e. clear, designed to protect your shadows and reduce noise in the luminance channel; it's just like applying 4:2:2 color subsampling directly on the sensor, and the aim is to gain dynamic range and reduce luma noise, at the expense of color depth (as defined in another post by someone else in this comments section) and maybe chroma noise (which would be a problem)
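to make the layout concrete, here's the repeating 2x3 tile I'm describing (purely hypothetical, of course):

```python
import numpy as np

# Purely hypothetical sketch of the 2x3 color-filter tile described above:
# G = green, W = white (clear), R = red, B = blue, K = "black" (gray) filter.
tile = np.array([["G", "W", "R"],
                 ["W", "B", "K"]])

rows, cols = 4, 9                           # small demo sensor (multiples of 2 and 3)
mosaic = np.tile(tile, (rows // 2, cols // 3))
for row in mosaic:
    print(" ".join(row))

# Half the photosites are W or K (luminance-oriented), the other half carry
# chroma -- roughly the spirit of 4:2:2 subsampling applied on the sensor itself.
```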

March 6, 2011


I should state this: I'm not an expert or an engineer, and I don't work making cameras or sensors
these are just personal opinions based on some empirical tests, possibly flawed conclusions, and wild speculation

March 6, 2011


I've had a think about this and I'm wondering if the results you're seeing are more due to the benefits of downsampling than anything intrinsic to the sensors themselves.

My thought is that perhaps you're not getting less noise per sensor area, but that the averaging process is essentially helping eliminate it. Averaging will tend to smooth things out, and since noise is entirely statistical it seems to me that averaging over multiple pixels each with a random amount of noise will tend to decrease its appearance. It doesn't decrease the level of the noise, but it makes it more uniform in appearance. This is not the case with actual information, since that has structure.

Consider an image entirely made of noise, distributed in a Gaussian shape between white and black with the peak at 50%. Averaging the entire image results in 50% grey on every pixel. This is the extreme case - the point is that the more pixels you average over, the more uniform a contribution the noise will make. It's still there, but it doesn't look as much like noise any more.
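A quick numerical sanity check of that intuition (just a sketch with synthetic Gaussian noise, nothing camera-specific):

```python
import numpy as np

# Sanity check with purely synthetic Gaussian noise: average blocks of 4
# "pixels" and see what happens to the noise's standard deviation.

rng = np.random.default_rng(1)
noise = rng.normal(loc=0.5, scale=0.2, size=1_000_000)     # a noise-only "image"

averaged = noise.reshape(-1, 4).mean(axis=1)               # 4:1 downsample by averaging

print(f"per-pixel noise std: {noise.std():.3f}")            # ~0.200
print(f"after 4:1 averaging: {averaged.std():.3f}")         # ~0.100 = 0.200 / sqrt(4)

# Averaging N independent samples cuts the noise's standard deviation by
# sqrt(N); real detail has structure, so it isn't smoothed away the same way.
```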

I've just been thinking about this now, there may well be significant flaws in it but it's worth considering.

March 10, 2011

Luke

"Averaging will tend to smooth things out"

it is exactly that

for an APS-C sensor, you could have 15 million big photosites, or 130 million much smaller ones, and then average the signal coming out of them to get a 15mpix image

what my test shows is that the second option can deliver less noise in the final 15mpix image than the first option

it doesn't need to be that way in all cases, but to me it means that photosite size is not the most important factor determining noise

March 11, 2011


Hi, some people seem to vastly misinterpret the basics:
In computer graphics, color depth or bit depth is the number of bits used to represent the color of a single pixel in a bitmapped image or video frame buffer. This concept is also known as bits per pixel (bpp). Higher color depth gives a broader range of distinct colors.

To make it simple: color depth is the POSSIBLE number of colors that an image can retain - not the ACTUAL number of colors, so it has nothing to do with resolution. A single-pixel image with logarithmic 32-bit depth has more color depth than a gigapixel image with one bit per color. Simply because you will be approximating the colors of the image - it's not the same as if they were there.
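To put numbers on the definition (simple arithmetic, nothing more):

```python
# Simple arithmetic on the definition above: bit depth fixes the POSSIBLE
# number of distinct values a pixel can take, regardless of resolution.

def possible_colors(bits_per_channel, channels=3):
    return (2 ** bits_per_channel) ** channels

print(possible_colors(8))    # 16,777,216 possible colors at 8 bits per channel
print(possible_colors(10))   # 1,073,741,824 at 10 bits per channel
print(possible_colors(1))    # 8 -- even a gigapixel image at 1 bit per channel
                             # cannot represent more than 8 distinct colors
```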

Is 10 bits an advantage? Yes. But... it's the incoming bits that matter also - are we interpreting the light at 12, 14, 16 bits?

So does the F3 really have a cleaner image at a basic level? I think YES. One more argument: Sony probably uses a backlit (back-illuminated) CMOS in its camera, where the electronics produce less noise.

As for the noise - the problem is that where there is noise there is detail loss. Yes, it is an advantage to have more pixels, but I'd rather have large pixels with less noise in a 2K image than a large, noisy image.

Unless there is a head-to-head comparison we can't tell. But I think Sony has an advantage in this area.

March 11, 2011

cmac

I think he's talking "color depth per full image" while you're talking "per pixel"

he's got a point, and so do you; maybe the name wasn't properly chosen, I'm no expert

about the head-to-head comparison: pvc posted one a couple of days back, but then it disappeared; I don't know if it's back online again, but you can read their summary: the F3 smokes both the AF100 and the 5D
http://provideocoalition.com/index.php/freshdv/story/f3_vs_af100_vs_5d/
(and I still think this doesn't mean fewer pixels means less noise; there are other, more important factors at work)

March 11, 2011


I'm not convinced by the idea of colour depth over the entire image. It's basically talking about supersampling to avoid aliasing, but talking about it in terms of colour, which to me confuses the issue. And if I've misunderstood it, then perhaps that at least makes the second half of my point for me! Obviously this, again, is a benefit of downsampling a larger resolution image, but I don't think it requires the new (and confusing) term of 'colour depth'.

Furthermore I think the actual maths he does is even more confusing; colours per image as opposed to colours per pixel is IMO an almost totally useless number. Defining a quantity which is 64 times larger when you increase resolution by 4, but imparts to the average viewer almost no additional visual information, is confusing and inelegant. The quantity he defines as colour depth has basically no counterpart in physical reality; it's just a number which sounds impressive. It only makes sense to talk about colour depth in terms of differences between adjacent pixels, or areas of colour. And inevitably, downsampling a larger image helps in this regard. So why not just talk about it in terms of anti-aliasing or supersampling?

But this is all somewhat superfluous anyway; in reality a sensor with no downsampling, no line-skipping and no gaps between pixels IS 'downsampling' the real image incident upon it; as such (apart from noise) all these considerations are pretty much abstract.

To be totally honest, I'd have to talk to the writer of the colour depth article properly to be sure of my interpretation of what they're saying; I honestly found the article very confusing, whether that's my fault or the author's (although Impact font really doesn't help! :p)

P.S. Samuel, I'm not trying to target you here! And I realise it wasn't even you who brought up the colour depth thing in the first place.

March 11, 2011

Luke

Hmm... reading the colour depth article again it would appear that the benefits he's discussing only hold if you display the final image at that higher resolution too; i.e. not downsampling. I think he's talking about the human eye combining two pixels into one - the eye is what does the downsampling, as it were, not the camera. If I've read that correctly, then this issue has nothing to do with this blog post anyway as we're talking about a higher res chip outputting 1080p.

March 11, 2011

Luke

I think his point is that you can make up for color depth (per pixel, as in the usual definition) by increasing resolution

I don't know if the correct term should be "color depth", but the idea has merit: ever seen a color picture in a newspaper? Most of them are made with only 4 colors, and make up for that with resolution

as you say, it's an "in eye" subsampling that's working its magic there, but you could run it on a computer too, and deliver a very nice lower-res image with lots of colors (you need clever software for that, though)

P.S.: nevermind, I thought this was a very polite and edifying discussion
(ever been on a forum that's not populated by grown-up professionals? I was amazed to find out there was something different when I moved from "PC nerd forums" to "moviemaker forums"; people even dare to use their real names here sometimes, as if they were not planning to troll around day and night...)

March 11, 2011


"Contrary to conventional wisdom, higher resolution actually compensates for noise"
http://dxomark.com/index.php/Our-publications/DxOMark-Insights/More-pixe...!

March 11, 2011

Stephen S.

It looks like the URL wasn't interpreted properly. Highlight it and copy and paste it, as the "!" is a critical part of the URL.

March 11, 2011

Stephen S.

I don't know about all this techno-babble, but after everything has been said, it's all about what appeals to our visual sense of the image. The F3 strikes a near-perfect balance between cost, technology and visual aesthetics. (The banding was most likely caused by recording through the camera's native 8-bit codec.) All in all, the F3 is bringing affordable digital cinema to the next level. Thank you, Sony!
JP

March 20, 2011

John Philips

I don't know why anyone looks at these tests at all; you can't make a comparison with any of these cameras on Vimeo and pass a fair opinion on anything. Hulu has banding in the dark areas... what is Hulu, 2 bits? The best way to do a comparison test is to shoot everything with no grading, grab a raw frame from each scene, and let everyone download it and form an estimate based on reality and not Hulu's good but horrid image!

Just my 3 1/2 cents in West Hollywood!!!

January 27, 2012

phillip

I've done many tryouts with the F3 in low light. What I've found quite amazing is the absence of noise...
By the way, the FS100 is not too far from the F3 in this regard... here are some samples: http://www.youtube.com/watch?v=h1nDkBjFOvo

April 22, 2012
