During a briefing on Kodak's new high-end 50-megapixel image sensor -- priced at a few thousand dollars apiece -- I finally got the answer to a question that has puzzled me ever since multi-megapixel imaging became commonplace over a decade ago. The question: Why do high-end pro video cameras continue to use three separate image sensors -- one each for red, green and blue -- while pro-quality digital still cameras (DSCs) use a single high-megapixel image sensor, such as the new "world's most megapixels" Kodak chip? (See Kodak delivers next-generation 50-MP CCD image sensor.)
Kodak's new CCD image sensor boasts 50 megapixels
In the video world, the 3-chip imaging system is widely considered superior, delivering better color accuracy, improved contrast, and a more "film-like" image. It requires more optics, however: a prism that splits the incoming light three ways, sending each primary color to its own sensor.
The answer, Kodak explained, has to do with artifacts created by placing the red-, green- and blue-sensing pixels side by side, rather than on top of each other, in a single-chip color image sensor. Because each pixel records only one color, the processing circuitry must, in essence, reconstruct full red, green, and blue images by interpolating the missing color values from neighboring pixels -- a process known as demosaicing. As the sensor's resolution climbs into the multi-megapixel and tens-of-megapixels range, these interpolation artifacts become less and less significant.
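To make that interpolation concrete, here's a minimal Python sketch of bilinear demosaicing -- my own illustration of the general technique, not Kodak's actual processing pipeline -- assuming the common RGGB Bayer filter layout:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of a single-chip Bayer (RGGB) mosaic.

    raw: 2-D array where each pixel holds one color sample, laid out
    in the repeating pattern   R G
                               G B
    Returns an (H, W, 3) RGB image; the two missing colors at every
    pixel are the "in-between" values interpolated from neighbors.
    """
    h, w = raw.shape
    rows, cols = np.indices((h, w))

    # Which physical pixel sits under which color filter.
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Normalized-convolution average: smooth the sparse samples and
    # the sample mask with the same kernel, then divide, so every
    # output value is a mean of the real samples nearby.
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        sparse = np.where(mask, raw, 0.0).astype(float)
        num = convolve(sparse, kernel, mode="mirror")
        den = convolve(mask.astype(float), kernel, mode="mirror")
        # Keep the measured sample where one exists; interpolate elsewhere.
        rgb[..., ch] = np.where(mask, raw, num / den)
    return rgb
```

The key point: two of every three color values at every output pixel are guesses, and those guesses are the source of the artifacts. A 3-chip camera measures all three colors at every pixel and never has to guess.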
But for the comparatively coarse resolution of standard-def video (720 x 480 = 345,600 pixels, or just 0.35 megapixels) -- and even HD video (1920 x 1080, which works out to about 2.1 megapixels) -- the artifact errors are more significant. Hence, 3-chip imaging continues to rule for pro video, but not for pro still cameras.
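The back-of-the-envelope arithmetic, with the 50-MP still sensor added for scale (that comparison is my addition, based on the chip's headline spec):

```python
# Pixel counts behind the comparison above. At SD resolution a Bayer
# sensor has only ~0.35M sample sites to interpolate from; the 50-MP
# Kodak chip has roughly 145x as many, so interpolation errors shrink.
formats = {
    "SD video": (720, 480),
    "HD video": (1920, 1080),
}
for name, (w, h) in formats.items():
    print(f"{name}: {w} x {h} = {w*h:,} pixels ({w*h/1e6:.2f} MP)")
print(f"Kodak 50-MP sensor vs. SD video: {50e6 / (720*480):.0f}x the pixels")
```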