On the left is a breakdown of the new iPad camera, and it has quite a few internal elements and an IR filter that should make for some quality 5 megapixel photos. While it doesn't have the wow factor of the iPhone 4S's 8 megapixel camera, it has something else you may find interesting - and it was only hinted at during Apple's press conference introducing the new iPad. That something is called Temporal Noise Reduction. I'll explain what that means and why it's important for small sensor cameras.

Gizmodo does a fairly decent job exploring the meaning here:


Basically, it exploits the fact that with video there are two pools of data to use: each separate image, and the knowledge of how the frames change with time. Using that information, it's possible to create an algorithm that can work out which pixels have changed between frames. But it's also possible to work out which pixels are expected to change between frames. For instance, if a car's moving from left to right in a frame, software can soon work out that pixels to the right should change dramatically.

So basically it's an algorithm that compares information between frames and uses an educated guess to figure out where noise is so it can remove it. This doesn't sound too groundbreaking on the surface, but it's the first time it's been included in a major way on an Apple device, and since Apple is obsessed with quality, it probably does a better job than your average point-and-shoot.
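To make that idea concrete, here's a minimal sketch of the general technique in Python with NumPy - blend a pixel toward its value in the previous frame only when the frame-to-frame change is small enough to be noise rather than motion. To be clear, this is a toy illustration of temporal noise reduction in general, not Apple's actual algorithm, and the threshold and blend values are arbitrary assumptions:

```python
import numpy as np

def temporal_denoise(prev, curr, motion_threshold=12.0, blend=0.5):
    """Blend the current frame toward the previous one wherever the
    per-pixel change looks like noise rather than motion.

    prev, curr: float32 grayscale frames with values in [0, 255].
    motion_threshold and blend are illustrative values only.
    """
    diff = np.abs(curr - prev)
    # Small differences are treated as sensor noise; large ones as motion.
    is_static = diff < motion_threshold
    # Average with the previous frame only in static regions, so moving
    # objects pass through untouched and don't leave ghost trails.
    return np.where(is_static, blend * prev + (1.0 - blend) * curr, curr)

# Example: a noisy static scene averages toward the clean signal, while
# a pixel with genuine motion is left alone.
rng = np.random.default_rng(0)
clean = np.full((4, 4), 100.0, dtype=np.float32)
frame_a = clean + rng.normal(0, 3, clean.shape).astype(np.float32)
frame_b = clean + rng.normal(0, 3, clean.shape).astype(np.float32)
frame_b[0, 0] = 200.0  # simulated motion at one pixel
print(temporal_denoise(frame_a, frame_b))
```

The interesting design question is entirely in that threshold: set it too low and the denoiser does nothing, too high and moving objects smear. That trade-off is exactly why motion prediction matters.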

If it works like it should, it means Apple has figured out how to better target noise reduction - instead of smoothing noise over a large area (what most cameras do), which decreases sharpness a great deal, it looks like they've developed a complicated set of instructions that isolates noisy pixels and replaces them with values sampled from the less noisy pixels around them. Not only that, but it does this in real time for every single frame, and then estimates where it expects those pixels to be in succeeding frames so that it can sample them correctly.
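As a rough illustration of that selective approach - again, the generic technique, not Apple's actual instructions - the sketch below flags only the pixels that deviate strongly from their local neighborhood and replaces just those with a value drawn from the cleaner pixels around them, leaving everything else untouched. The window size and threshold here are assumptions for demonstration:

```python
import numpy as np
from scipy.ndimage import median_filter

def replace_outlier_pixels(frame, k=3, sigma_factor=3.0):
    """Replace only pixels that deviate strongly from their local
    neighborhood; all other pixels keep their original sharpness.
    k and sigma_factor are illustrative parameters, nothing more.
    """
    local_median = median_filter(frame, size=k)
    residual = frame - local_median
    # Robust noise estimate from the residuals (median absolute
    # deviation scaled to a Gaussian sigma).
    noise_sigma = np.median(np.abs(residual)) / 0.6745
    outliers = np.abs(residual) > sigma_factor * noise_sigma
    out = frame.copy()
    # Sample replacements from the less noisy surrounding pixels.
    out[outliers] = local_median[outliers]
    return out
```

Contrast this with blanket smoothing: a plain blur touches every pixel and muddies detail everywhere, while a selective replacement like this only spends its corrections where the noise actually is.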

Again, this is impressive if implemented correctly, because it's more common to find these types of algorithms in post-production software, where they can take their time removing noise. Neat Video and Magic Bullet both offer temporal noise reducers that work this way: they first analyze the entire video to learn its noise pattern, then go back through and resample pixels based on that pattern and the location of the noisy pixels. Supposedly the iPad is doing all of this in real time to greatly reduce noise without affecting sharpness. That's really the key reason noise reduction is hit-or-miss: in-camera noise reduction usually hurts sharpness because predicting where future noise will be is not easy, so these processors sample a large enough area to be sure most of the noise is removed - and in the process the image becomes muddy.
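That two-pass structure can be sketched in the same style - loosely in the spirit of tools like Neat Video, not their actual implementations. A first pass measures the clip's noise level from frame-to-frame differences; a second pass revisits every frame and resamples only the pixels whose change falls inside that measured noise band:

```python
import numpy as np

def estimate_noise_sigma(frames):
    """Pass 1: estimate the noise level across the whole clip from
    frame-to-frame differences (a mostly static scene is assumed
    for this toy example)."""
    diffs = [np.abs(b - a) for a, b in zip(frames, frames[1:])]
    # Median absolute difference scaled to a sigma; sqrt(2) accounts
    # for differencing two independently noisy frames.
    return float(np.median(diffs) / (0.6745 * np.sqrt(2)))

def denoise_clip(frames, sigma_factor=3.0):
    """Pass 2: revisit every frame and average with its predecessor
    only where the change sits inside the measured noise band."""
    sigma = estimate_noise_sigma(frames)
    out = [frames[0]]
    for prev, curr in zip(frames, frames[1:]):
        static = np.abs(curr - prev) < sigma_factor * sigma
        out.append(np.where(static, 0.5 * (prev + curr), curr))
    return out
```

The reason post tools can afford this is the first pass: they see the whole clip before touching a pixel. Doing it in-camera means estimating that noise profile on the fly, which is where the real-time claim gets impressive.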

Now, this isn't new technology, but it hasn't been implemented as impressively in small sensor cameras because they simply don't have the processing power. The new iPad has a dual-core CPU and a quad-core GPU. That's a lot of horsepower for these types of calculations, and as processors shrink and improve, you'll see this kind of technology become standard in all small sensor cameras. If pixel densities keep rising while the physical size of the sensor stays the same, then these noise reduction technologies must get much better - gapless microlenses and other hardware improvements can only go so far. If you can predict where noise will be, and do it efficiently, you can theoretically shoot in much lower light with far better results, because the video will remain sharp and only the noisy pixels will be resampled, instead of a large section of pixels.

There is a school of thought that noise reduction should stay in post-production, and in some cases that might be valid, but for less serious work, how many people are going to run their iPad or point-and-shoot footage through major noise reduction? The advantage a hardware manufacturer has over software companies making de-noising products is that it can integrate noise reduction into the entire imaging pipeline at the hardware level. This means we could reach a point where hardware noise reduction does an equivalent job to software noise reduction, and if the differences are minimal, it's a lot easier not having to take that extra step in post.

It remains to be seen how much of this technology will find its way into large sensor cameras, but their advantage is that with a bigger sensor there is a lot more room to improve the physical aspects of light collection and make the sensor more efficient. It's certain that those large sensors will also reach a tipping point where they can't be greatly improved, and it will come down to oversampling or temporal noise reduction to shoot in lower light and improve sharpness.

[via Gizmodo]