Do Human Eyes 'See' Like Cameras? A Look at the Resolution & Frame Rate of Vision
Cameras are the “eyes” of cinema, recognizing, capturing, and processing images at certain frame rates and resolutions. But what about our own eyes? At what “frame rate” do we process images, and at what resolution? In these excellent videos, Michael Stevens, host of everybody’s favorite YouTube science channel, Vsauce, shows us how our eyes compare to cameras, not only in how well they “see”, but also in how they “record” images.
In case you haven’t heard, science is awesome, especially the science of cinema. We’ve talked before about how rolling shutter isn’t just something that annoys cinematographers, but is a natural distortion that affects the way we see our universe. This time (and again with a video from Vsauce), we take a scientific look at resolution and frame rates. At what resolution do we see the world with our eyes? Do we see in “frame rates”?
First of all, our eyes and brains process images differently than lenses and cameras. In this first video from Vsauce, which explains the nature of “video”, Stevens talks about the difference in how our eyes receive information and then communicate that information to our brain versus how lenses and cameras do it. A motion picture camera captures single still images that are later played back sequentially at a high enough frame rate that they appear to be moving, an effect called “beta movement”. But the way our eyes work is very different. Stevens says:
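To make the "discrete stills" point concrete, here's a minimal back-of-envelope sketch (assuming the traditional 24 fps cinema frame rate) showing how each still occupies a fixed slice of time on playback, which is what beta movement stitches into perceived motion:

```python
# A film camera records discrete stills; on playback, each frame
# occupies a fixed slice of time. 24 fps is the traditional cinema rate.
fps = 24
frame_duration_ms = 1000 / fps  # each still is on screen for ~41.7 ms

# Timestamps (in ms) at which the first few frames appear:
timestamps = [round(i * frame_duration_ms, 1) for i in range(5)]

print(round(frame_duration_ms, 1))  # 41.7
print(timestamps)                   # [0.0, 41.7, 83.3, 125.0, 166.7]
```

The eye, by contrast, has no equivalent of these fixed time slices, which is exactly the contrast Stevens draws next.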
“Our eyes are not cameras. Instead, they track onto objects and receive a continuous flow of photons onto the retina, sending information via a chemical reaction to the brain.”
The resolution of our eyes is also fundamentally different from that of cameras. When a camera captures an object, it captures it in its entirety to produce an image. Our eyes, again, don’t capture a single image but rather a continuous flow of them. Even more than that, we receive the most visual information from our central visual field (thanks to the fovea), and only there are “optimal color vision” and “20/20 acuity” possible. What does all of that mean? Well, it means that our vision is limited, but it is also, in a way, aggregated by our brain from different sources and with varying methods in order to make sense of the world around us.
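That “20/20 acuity” figure can be turned into rough numbers. A common rule of thumb is that 20/20 vision means resolving detail about one arcminute (1/60 of a degree) across, i.e. roughly 60 samples per degree of visual field. Here's a hedged sketch using a hypothetical viewing setup (the screen width and distance below are made-up example values, not from the videos):

```python
import math

# Rule of thumb: 20/20 acuity ~ resolving 1 arcminute of detail,
# which works out to roughly 60 samples per degree of visual field.
acuity_samples_per_degree = 60

# Hypothetical example: a 4K screen (3840 px wide), 0.8 m wide,
# viewed from 2 m away. How many pixels per degree does it deliver?
screen_width_m = 0.8
viewing_distance_m = 2.0
screen_width_deg = math.degrees(
    2 * math.atan(screen_width_m / 2 / viewing_distance_m)
)
pixels_per_degree = 3840 / screen_width_deg

print(round(screen_width_deg, 1))  # screen spans ~22.6 degrees of view
print(round(pixels_per_degree))    # ~170 px/deg, above the ~60/deg threshold
```

In other words, only the tiny foveal patch of the retina samples at that full acuity; the brain assembles the impression of a uniformly sharp scene from it.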
Learning this stuff isn’t just good for a little entertainment; it can actually help you see (or even develop) what could be the next revolutionary technology in the cinematic world. Considering how technologically advanced cameras and lenses have become in just a short amount of time, I wouldn’t be surprised if we saw developers trying to emulate our eyes’ system of image processing with something like “non-frame-rate recording”, “∞ fps”, or “continuous photon something-something capture”.
What do you think about the information from the Vsauce videos? Let us know in the comments below.