Behind every great image taken by a camera phone there's a huge amount of electronic / optical / mechanical magic taking place. Camera users are normally oblivious to this magic since it happens so quietly and unobtrusively. Here we discuss the challenges of creating a good image from a CMOS sensor inside a camera phone.
How an Image is Created
In a film camera, light captured through an optical system strikes a piece of film, which is exposed and then developed using a chemical process. In a digital camera, that light still travels through an optical system comprising a multi-element lens and a barrel, but now the light strikes a digital sensor array of rows and columns, made of millions of tiny picture elements, or pixels. Figure 1 is a mechanical overview of a digital camera.
Fig. 1: Mechanical overview of a digital camera. The optics are virtually identical to those in a conventional film camera.
When light strikes the pixel array it goes through a color filter array that ensures that only blue, red or green light actually hits the appropriate pixel. At each pixel an analog signal is created, which goes through an ADC (analog to digital converter) to become a digital signal. This signal is then sent through what we will call the Image Pipe (or I-Pipe) that comprises a series of electronic filters that make the signal look like a real picture.
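The last step of that capture chain, analog-to-digital conversion, can be illustrated with a small sketch. This is a simplified model, not Agilent's actual converter: it assumes a normalized pixel voltage between 0.0 and 1.0 and a hypothetical 10-bit ADC.

```python
def adc_quantize(voltage, bits=10):
    """Quantize a normalized analog pixel voltage to an N-bit code.

    A 10-bit ADC maps the 0.0-1.0 voltage range onto 1024 discrete
    digital levels (codes 0 through 1023).
    """
    levels = 2 ** bits
    # Clamp to the valid input range, then scale to the nearest code.
    v = min(max(voltage, 0.0), 1.0)
    return min(int(v * levels), levels - 1)

# A half-scale voltage lands in the middle of the 10-bit code range.
print(adc_quantize(0.5))   # 512
```

Real sensor ADCs add refinements such as correlated double sampling and per-channel gain, but the core idea is the same: one analog value in, one quantized digital code out, for every pixel in the array.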
The I-Pipe adjusts the white balance, the color, and “reverses” certain anomalies introduced into the picture by the nature of the capture method. Examples include lens shadows, geometric distortion, reduced picture focus away from the center of the lens, and digital sensor noise. The Agilent I-Pipe also compresses the image using JPEG to create a small, accurate, compressed image that can quickly be written into a storage medium.
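White-balance adjustment is one of the simpler I-Pipe stages to illustrate. The sketch below uses the classic gray-world assumption (the average scene color should be neutral gray); it is a generic illustration, not the algorithm Agilent's I-Pipe actually uses, and the function name and data layout are hypothetical.

```python
def gray_world_white_balance(pixels):
    """Scale each color channel so its mean matches the overall mean.

    `pixels` is a flat list of (r, g, b) tuples. Under the gray-world
    assumption, a color cast shows up as unequal channel means, so we
    apply a per-channel gain that equalizes them.
    """
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    overall = sum(means) / 3
    gains = [overall / m for m in means]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A scene lit by a warm (reddish) source: red runs hot, blue runs cold.
scene = [(180, 120, 60), (200, 140, 80), (160, 100, 40)]
balanced = gray_world_white_balance(scene)
```

After correction the three channel averages are equal, which is exactly what "neutral" means under this assumption. The other corrections listed above (lens shading, distortion, noise) are each separate filter stages in the pipe.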
Pre-Processing the Light
An absorptive or reflective infrared filter is used to pass the visible part of the spectrum and block infrared radiation above 780 nm. This ensures that the sensor focuses only on what the eye will see, and optimizes the integrity of the colors. If infrared light is not cut off in this manner it can cause blurriness and decrease the sharpness in the image formed by the lens.
A microlens is also used to pre-process the falling light so it is refracted appropriately into the pixel in as vertical a direction as possible. This microlens enhances optical sensitivity of the pixel and usually sits right above the color filter array.
Color Filter Array - Bayer Filter
Photodiodes are sensitive to brightness and not to colors. Therefore, some mechanism must be used to artificially make them sensitive to specific colors so those colors can eventually be represented to the human eye. A color filter array is used to ensure that each sensor pixel receives light of just one color: typically red, blue and green.
Fig. 2: The human eye is twice as sensitive to green as it is to red and blue. The Bayer color filter alternates a row of red and green filters with a row of blue and green filters, giving twice as many green pixels as red or blue.
There are different patterns that can be used for the color filter array. Because the human eye is twice as sensitive to green as it is to red and blue, the camera needs more green pixels to emulate the eye's perception. The Bayer pattern (Fig. 2) alternates a row of red and green filters with a row of blue and green filters, yielding twice as many green pixels as red or blue. The raw output from the Bayer filter is a mosaic of blue, green and red pixels, each varying in intensity according to the amount of light striking that pixel.
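The mosaic structure is easy to see in code. This sketch samples a full-color image down to a Bayer mosaic using the row arrangement described above (an RGGB layout); the function name and list-of-tuples image format are assumptions for illustration.

```python
def bayer_mosaic(rgb):
    """Keep one color sample per pixel, following the Bayer pattern.

    `rgb` is a 2-D list of (r, g, b) tuples. Even rows alternate
    red/green filters; odd rows alternate green/blue filters, so every
    2x2 cell holds one red, two green and one blue sample.
    """
    mosaic = []
    for y, row in enumerate(rgb):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            if y % 2 == 0:
                out_row.append(r if x % 2 == 0 else g)  # R G R G ...
            else:
                out_row.append(g if x % 2 == 0 else b)  # G B G B ...
        mosaic.append(out_row)
    return mosaic
```

Counting the samples in any 2x2 cell confirms the green-heavy ratio: two green values against one red and one blue. Reconstructing full color at every pixel from this mosaic (demosaicing, or interpolation) is one of the jobs of the image pipe.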