The extended dynamic range, fast readout speed, system-on-chip integration, and low-power operation of Digital Pixel System (DPS) technology from Pixim represent a substantial advance over existing image capture and processing technologies. While DPS enhances video image capture for many applications, it is particularly useful for security cameras that must provide detail over a wide range of illumination conditions.
Solid-state image sensor technology dates back to the invention of the first charge-coupled device (CCD) in the late 1960s. The 1990s saw the introduction of the CMOS active pixel sensor (APS), followed by the development of the DPS platform. DPS technology grew out of more than eight years of research at Stanford University, and Pixim commercialized it three years later.
Figure 1. Pixim D2500 'Orca' Chipset
Pixim's recently announced 'Orca' D2500 chipset provides all of the functions necessary to build advanced surveillance cameras and emerging imaging applications in two small Ball Grid Array devices.
Overview of Digital Pixel System Technology
The Digital Pixel System architecture and its tightly coupled imaging software provide superior image quality even under widely varying light conditions and in wide dynamic range scenes that contain both dark and bright areas. DPS produces dramatically better wide dynamic range images than the charge-coupled devices (CCDs) or CMOS active pixel sensors (APS) used in similar video camera applications.
The highly integrated two-chip set consists of a digital image sensor chip and a digital image processor chip. Chip interconnection and user interface layout are straightforward. The user interface includes a customizable menu-driven on-screen display (OSD), switches, and potentiometers, in any combination. Manufacturers define which of the many settings are available for user control and which are fixed at the factory. Fixed settings can be locked so that they cannot be identified or changed in the field, if desired.
Additionally, DPS technology offers:
- Programmable exposure controls, configurable noise reduction, and greater dynamic range
- Digital video output and/or analog composite video output
- Reduced fixed pattern noise problems commonly associated with other sensors
- Digital pan / tilt / zoom, automatic white balance, and color correction, among other capabilities, using digital signal processing provided in the image processor chip
DPS Image Capture and Processing
DPS technology converts the quantity of light striking each picture element (pixel) to a digital value at the earliest possible point: at the pixel itself. An analog-to-digital converter (ADC) is designed into each pixel and operates simultaneously with the ADCs of every other pixel in the sensor. This pixel-level ADC architecture permits the use of many highly parallel, low-speed circuits operating close to where the photodiode signals are generated, which is key to optimizing the signal-to-noise ratio (SNR) for each pixel.
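The parallel, pixel-level conversion described above can be pictured as a single quantization step applied to the whole array at once. The following sketch uses made-up photodiode voltages and an assumed 8-bit, 1.0 V full-scale converter; names and values are illustrative, not Pixim's specifications.

```python
import numpy as np

# Hypothetical 4x4 sensor: photodiode voltages in volts (illustrative values).
pixel_voltages = np.array([
    [0.12, 0.55, 0.90, 0.33],
    [0.02, 0.71, 0.48, 0.99],
    [0.60, 0.15, 0.81, 0.27],
    [0.44, 0.05, 0.66, 0.92],
])

FULL_SCALE = 1.0   # assumed ADC reference voltage
BITS = 8           # assumed per-pixel ADC resolution

# Every pixel's ADC converts at the same time: one vectorized quantization
# stands in for the array of parallel, low-speed per-pixel converters.
codes = np.clip(np.round(pixel_voltages / FULL_SCALE * (2**BITS - 1)),
                0, 2**BITS - 1).astype(np.uint8)

print(codes)  # every pixel leaves the array as a digital code
```

Because each converter only has to digitize one pixel per frame, it can run slowly and close to the photodiode, which is what keeps the per-pixel noise low.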
The DPS system uses the individual ADC in each pixel to perform non-destructive correlated double sampling (CDS), sampling the growing light intensity at each pixel many times during each image capture period. This allows the exposure level of each pixel to be determined by the rate of change of the collected charge rather than by its absolute magnitude alone. Each pixel also has an adjustable offset-cancellation gain amplifier to ensure uniform response throughout the sensor array. Together, these innovations greatly reduce the visible fixed pattern noise commonly associated with the column-level ADCs used in APS sensors.
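The core idea behind correlated double sampling can be sketched in a few lines: each pixel is read once just after reset and again after integration, and subtracting the two readings cancels the pixel's fixed offset. The offsets and signal levels below are made-up numbers for illustration only.

```python
# Correlated double sampling (CDS), sketched with made-up numbers.
# Three pixels receive identical illumination but have different fixed offsets.
pixel_offsets = [0.03, -0.01, 0.05]   # per-pixel offsets (hypothetical, volts)
true_signals  = [0.40, 0.40, 0.40]    # identical light-induced signal

reset_samples  = list(pixel_offsets)                              # read at reset
signal_samples = [s + o for s, o in zip(true_signals, pixel_offsets)]  # after integration

# Subtracting the reset sample removes the offset, i.e. the fixed pattern noise.
cds_values = [sig - rst for sig, rst in zip(signal_samples, reset_samples)]
print(cds_values)  # each value is ~0.40: the per-pixel offsets are gone
```

After subtraction, all three pixels report the same value despite their differing offsets, which is why per-pixel CDS suppresses fixed pattern noise.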
Because DPS sensors are digital, pixel readout is much faster and more accurate. Each sample of the digital image is captured in on-chip RAM, and the high bandwidth of this tightly coupled local memory is what enables the sensor's superior dynamic range. Such an approach is impractical for CCD or APS sensors because of their reliance on analog readout circuitry; DPS avoids this limitation by performing digital sampling at each pixel.
Wide Dynamic Range
Wide dynamic range is essential for capturing image detail at all light levels. Over a typical 24-hour day, today's surveillance cameras are plagued by dynamic range problems caused by severe reflections, glare, car headlights, and direct sunlight. DPS achieves its wide dynamic range through a patented non-destructive multi-sampling image capture capability and advanced image-processing algorithms.
The dynamic range of an imaging system is the ratio of the brightest image it can capture to the darkest. Light more intense than the brightest capturable level will saturate the sensor, while light less intense than the darkest capturable level will not register on it. Both conditions distort the image, hiding potentially vital information that lies outside the sensor's dynamic range.
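Dynamic range is commonly quoted in decibels as 20·log10 of that brightest-to-darkest ratio. The intensities below are illustrative only, not measured figures for any sensor.

```python
import math

# Dynamic range as the ratio of brightest to darkest capturable intensity,
# expressed in decibels. The intensity values are illustrative only.
brightest = 100_000.0   # e.g. direct sunlight (arbitrary units)
darkest = 1.0           # e.g. deep shadow (same units)

ratio = brightest / darkest
dr_db = 20 * math.log10(ratio)
print(f"{dr_db:.0f} dB")  # a 100,000:1 ratio corresponds to 100 dB
```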
When an exposure begins, each pixel is charged at a rate that is proportional to the intensity of the light that strikes it. A stronger light source will charge a pixel more quickly than a weaker light source. Existing analog technology typically uses a single exposure time for all pixels. At the end of the exposure, the camera will sense the total charge accumulated in each pixel. But that means some pixels (the brighter ones) may be overexposed while others (the darker ones) may be underexposed.
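The single-exposure problem described above can be shown numerically: with one global exposure time, a bright pixel clips at full well while a dark pixel barely registers. All constants here are made-up values for illustration.

```python
# One global exposure time for every pixel: the classic analog limitation.
FULL_WELL = 1.0                      # saturation level (arbitrary units)
EXPOSURE = 0.01                      # single exposure time for all pixels (s)

intensities = [2.0, 50.0, 500.0]     # dark, mid, and very bright pixels (units/s)
charges = [min(i * EXPOSURE, FULL_WELL) for i in intensities]
print(charges)  # the brightest pixel clips at FULL_WELL; its true value is lost
```

Once the bright pixel clips, its true intensity (5.0 well-units' worth of light) is unrecoverable from the single end-of-exposure reading, which is exactly the information DPS preserves by sampling during the exposure.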
DPS overcomes this limitation by sampling the light striking each pixel multiple times during the exposure period and analyzing how quickly each pixel is being charged. In this way, DPS measures light intensity using both the rate at which the charge grows and the total charge accumulated over the entire exposure.
Specifically, the DPS system records the length of time required to nearly saturate each pixel. Pixels exposed to bright illumination tend to saturate more quickly than others. For each pixel, DPS determines whether it will saturate before the next sample; if so, the pixel's elapsed exposure time is stored in memory together with its current accumulated charge.
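The store-before-saturation logic can be sketched as follows: at each sample time the pixel's charge is checked, and when it is about to saturate, the last unsaturated (charge, time) pair is kept, from which intensity is recovered as charge divided by time. The sample schedule and constants are illustrative assumptions, not Pixim's actual parameters.

```python
# Multi-sampling sketch: keep the last unsaturated reading for each pixel.
FULL_WELL = 1.0                                      # saturation level (arbitrary units)
SAMPLE_TIMES = [0.001, 0.002, 0.004, 0.008, 0.016]   # exposure checkpoints (s)

def capture(intensity):
    """Return (stored_charge, stored_time) for one pixel of given intensity."""
    stored = (0.0, 0.0)
    for t in SAMPLE_TIMES:
        charge = intensity * t
        if charge >= FULL_WELL:      # pixel would saturate by this sample:
            return stored            # keep the last unsaturated reading
        stored = (charge, t)
    return stored                    # never saturated: full exposure was used

for intensity in (10.0, 900.0):
    charge, t = capture(intensity)
    print(intensity, charge / t)     # recovered intensity matches the input
```

A dim pixel (10 units/s) integrates for the full exposure, while a bright pixel (900 units/s) is captured at the first checkpoint before it clips; in both cases charge/time recovers the intensity, extending the usable range well beyond a single full-well reading.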
The advantage of this approach is that both the full range of each individual pixel and the rate of change of its charge are used to form the resulting image, significantly increasing the captured dynamic range and SNR. Other technologies measure only the pixel value, not its rate of change.
DPS also provides improved color performance not available with other sensor technologies: the data recorded by each pixel is of very high quality in terms of both accuracy and precision. This high data quality allows the DPS image-processing algorithms to render excellent fidelity across all colors and intensities. In surveillance applications, color accuracy is critical to forensic analysis of stored video once an incident has been recorded.
DPS provides a fast global electronic shutter to capture bright lights and produces images that do not exhibit rolling shutter artifacts common in APS sensors, or interlace artifacts common to CCD sensors. Since multi-sampling is fundamental to DPS and is included in the basic firmware, no programming is required by developers to achieve this level of quality.