Camera sources today are overwhelmingly based on either Charge-Coupled Device (CCD) or Complementary Metal-Oxide Semiconductor (CMOS) technology. Both of these technologies convert light into electrical signals, but they differ in how this conversion occurs.
In CCD devices, an array of millions of light-sensitive picture elements, or pixels, spans the surface of the sensor. After exposure to light, the accumulated charge over the entire CCD pixel array is read out at one end of the device and then digitized via an Analog Front End (AFE) chip or CCD processor. On the other hand, CMOS sensors directly digitize the exposure level at each pixel site.
In general, CCDs offer the highest image quality and lowest noise, but they are not power-efficient. CMOS sensors are easy to manufacture and have low power dissipation, but at reduced quality. Part of the reason for this is that the transistors at each pixel site tend to occlude light from reaching part of the pixel. However, CMOS has started giving CCD a run for its money in the quality arena, and increasing numbers of mid-tier camera sensors are now CMOS-based.
Regardless of their underlying technology, all pixels in the sensor array are sensitive to grayscale intensity -- from total darkness (black) to total brightness (white). The number of intensity levels each pixel can resolve is determined by its "bit depth." Therefore, 8-bit pixels can distinguish between 2^8, or 256, shades of gray, whereas 12-bit pixel values differentiate between 4096 shades. Layered over the entire pixel array is a color filter that segments each pixel into several color-sensitive "subpixels." This arrangement allows a measure of different color intensities at each pixel site. Thus, the color at each pixel location can be viewed as the sum of its red, green and blue channel light content, superimposed in an additive manner. The higher the bit depth, the more colors that can be generated in the RGB space. For example, 24-bit color (8 bits each of R, G and B) results in 2^24, or 16.7 million, discrete colors.
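To make the arithmetic concrete, the short C sketch below computes the number of representable shades and colors for a few common bit depths. It is purely illustrative and not tied to any particular sensor; the levels() helper is an invented name for this example.

    #include <stdio.h>

    /* Number of discrete levels representable with a given bit depth
     * (levels = 2^bits). Illustrative helper only. */
    static unsigned long levels(unsigned bits)
    {
        return 1UL << bits;
    }

    int main(void)
    {
        printf("8-bit grayscale:  %lu shades\n", levels(8));    /* 256        */
        printf("12-bit grayscale: %lu shades\n", levels(12));   /* 4096       */

        /* 24-bit RGB color: 8 bits each of R, G and B */
        printf("24-bit RGB color: %lu colors\n",
               levels(8) * levels(8) * levels(8));              /* 16,777,216 */
        return 0;
    }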
In order to properly represent a color image, a sensor needs 3 color samples -- most commonly Red, Green and Blue -- for every pixel location. However, putting 3 separate sensors in every camera is not a financially tenable solution (although lately such technology is becoming more practical). What's more, as sensor resolutions increase into the 5-10 megapixel range, it becomes apparent that some form of image compression is necessary to avoid outputting 3 bytes (or worse yet, three 12-bit words for higher-resolution sensors) for each pixel location.
Not to worry, because camera manufacturers have developed clever ways of reducing the number of color samples necessary. The most common approach is to use a Color Filter Array (CFA), which measures only a single color at any given pixel location. Then, the results can be interpolated by the image processor to appear as if 3 colors were measured at every location.
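As a rough illustration of why the CFA approach matters, the sketch below compares per-frame and per-second data sizes for a hypothetical 8-Mpixel, 8-bit sensor outputting full RGB versus raw CFA data. The resolution and frame rate are assumed values chosen for the example, not figures from this article.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed values for illustration: 8-Mpixel sensor, 8 bits per
         * sample, 30 frames per second. */
        const double pixels    = 8.0e6;
        const double fps       = 30.0;
        const double rgb_frame = pixels * 3.0;   /* 3 bytes/pixel, full RGB */
        const double cfa_frame = pixels * 1.0;   /* 1 byte/pixel, raw CFA   */

        printf("Full RGB frame: %.0f MB (%.0f MB/s at %.0f fps)\n",
               rgb_frame / 1.0e6, rgb_frame * fps / 1.0e6, fps);
        printf("Raw CFA frame:  %.0f MB (%.0f MB/s at %.0f fps)\n",
               cfa_frame / 1.0e6, cfa_frame * fps / 1.0e6, fps);
        return 0;
    }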
The most popular CFA in use today is the Bayer pattern, shown in Figure 1. This scheme, invented at Kodak, takes advantage of the fact that the human eye discerns differences in green-channel intensities more readily than changes in red or blue. Therefore, in the Bayer color filter array, the Green subfilter occurs twice as often as either the Blue or Red subfilter. This results in an output format sometimes known as '4:2:2 RGB', where 4 Green values are sent for every 2 Red and 2 Blue values.
Figure 1: Bayer pattern image sensor arrangement
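To illustrate how an image processor can interpolate the missing color samples, the following is a minimal sketch of bilinear demosaicing over a Bayer RGGB arrangement. The array layout, image dimensions and function names are assumptions made for the example; border pixels are skipped for brevity, and real image pipelines use considerably more sophisticated interpolation.

    #include <stdint.h>

    enum { W = 640, H = 480 };   /* assumed image dimensions */

    static uint8_t avg2(uint8_t a, uint8_t b) { return (a + b) / 2; }
    static uint8_t avg4(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
    { return (a + b + c + d) / 4; }

    /* raw[y][x]: one Bayer sample per pixel (R at even row/even col,
     * B at odd row/odd col, G elsewhere); rgb[y][x][0..2]: R, G, B. */
    void demosaic_bilinear(const uint8_t raw[H][W], uint8_t rgb[H][W][3])
    {
        for (int y = 1; y < H - 1; y++) {
            for (int x = 1; x < W - 1; x++) {
                int red_row = (y % 2 == 0);   /* rows alternate R-G and G-B */
                int red_col = (x % 2 == 0);

                if (red_row && red_col) {            /* native Red site  */
                    rgb[y][x][0] = raw[y][x];
                    rgb[y][x][1] = avg4(raw[y-1][x], raw[y+1][x],
                                        raw[y][x-1], raw[y][x+1]);
                    rgb[y][x][2] = avg4(raw[y-1][x-1], raw[y-1][x+1],
                                        raw[y+1][x-1], raw[y+1][x+1]);
                } else if (!red_row && !red_col) {   /* native Blue site */
                    rgb[y][x][2] = raw[y][x];
                    rgb[y][x][1] = avg4(raw[y-1][x], raw[y+1][x],
                                        raw[y][x-1], raw[y][x+1]);
                    rgb[y][x][0] = avg4(raw[y-1][x-1], raw[y-1][x+1],
                                        raw[y+1][x-1], raw[y+1][x+1]);
                } else {                             /* native Green site */
                    rgb[y][x][1] = raw[y][x];
                    if (red_row) {   /* Red left/right, Blue above/below */
                        rgb[y][x][0] = avg2(raw[y][x-1], raw[y][x+1]);
                        rgb[y][x][2] = avg2(raw[y-1][x], raw[y+1][x]);
                    } else {         /* Blue left/right, Red above/below */
                        rgb[y][x][2] = avg2(raw[y][x-1], raw[y][x+1]);
                        rgb[y][x][0] = avg2(raw[y-1][x], raw[y+1][x]);
                    }
                }
            }
        }
    }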