In digital imaging systems, the problem of camera calibration for a known illuminant can be represented as a discrete, three-dimensional vector function:

x' = F(x),

where F(x) is the mapping vector function and x is the discrete (typically 8-, 10- or 12-bit) vector of R,G,B principal color components. Depending on whether the mapping is linear and whether the color components are corrected independently of one another, the mapping function can be categorized as shown in Table 1.
Table 1 – Camera calibration methods
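To illustrate this categorization, the sketch below contrasts the two linear cases: a channel-independent mapping, which reduces to three per-channel gains, and a cross-channel mapping, which is a 3x3 matrix multiply. The function names and all numeric values are assumed for illustration and do not come from Table 1.

```python
import numpy as np

def map_channelwise_linear(x, gains):
    """Linear, channel-independent mapping: x'_c = k_c * x_c, clamped to 8 bits."""
    return np.clip(x * gains, 0, 255).astype(np.uint8)

def map_crosschannel_linear(x, M):
    """Linear mapping with cross-channel terms: x' = M @ x, clamped to 8 bits."""
    return np.clip(M @ x, 0, 255).astype(np.uint8)

x = np.array([200.0, 128.0, 64.0])       # an 8-bit R,G,B sample (assumed)
gains = np.array([1.10, 1.00, 1.35])     # per-channel gains (assumed)
M = np.array([[ 1.10, -0.05,  0.00],     # 3x3 color-correction matrix
              [-0.02,  1.00, -0.03],     # (assumed values)
              [ 0.00, -0.08,  1.35]])

print(map_channelwise_linear(x, gains))
print(map_crosschannel_linear(x, M))
```

The cross-channel form can correct hue errors that no set of independent per-channel gains can reach, at the cost of three multiply-accumulates per output component instead of one multiply.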
The von Kries hypothesis
The simplest and most widely used method for camera calibration is based on the von Kries Hypothesis [1], which aims to transform colors to the LMS color space, then performs correction using only three multipliers on a per-channel basis. The hypothesis rests on the assumption that color constancy in the human visual system can be achieved by individually adapting the gains of the three cone responses; the gains will depend on the sensory context, that is, the color history and surround. Cone responses from two radiant spectra, f1 and f2, can be matched by an appropriate choice of diagonal adaptation matrices D1 and D2 such that D1 S f1 = D2 S f2, where S is the cone sensitivity matrix. In the LMS (long-, medium-, short-wave sensitive cone-response) space, the adaptation therefore reduces to one gain per channel: L2 = kL L1, M2 = kM M1, S2 = kS S1.
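A minimal sketch of this diagonal adaptation follows. The gains are chosen so that the source illuminant's white maps onto the target white, k_c = dest_white_c / src_white_c; the white-point values below are assumed for illustration.

```python
import numpy as np

def von_kries_gains(src_white_lms, dst_white_lms):
    """Diagonal adaptation gains (kL, kM, kS) mapping one white to another."""
    return dst_white_lms / src_white_lms

def adapt(lms, gains):
    """Apply von Kries adaptation: an independent multiply per channel."""
    return lms * gains

src_white = np.array([0.95, 1.00, 1.08])  # assumed source white point in LMS
dst_white = np.array([1.00, 1.00, 1.00])  # assumed target white point

gains = von_kries_gains(src_white, dst_white)
print(adapt(src_white, gains))  # the source white lands on the target white
```

Because the adaptation is diagonal, only the chosen reference color is guaranteed to map exactly; every other color is scaled by the same three gains.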
The advantage of this method is its relative simplicity and easy implementation with three parallel multipliers, either as part of a digital image sensor or in the image sensor pipeline (ISP).
In a practical implementation, instead of using the LMS space, the RGB color space is used to adjust channel gains such that one color, typically white, is represented by equal R,G,B values. However, adjusting the perceived cone responses or R,G,B values for one color does not guarantee that other colors are represented faithfully.
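This RGB white-balance step can be sketched as follows: gains are derived from a measured white (or gray) patch so that the patch comes out with equal R,G,B values. Normalizing to the green channel is a common convention, and the measured patch value below is an assumed example.

```python
import numpy as np

def white_balance_gains(white_patch_rgb):
    """Per-channel gains so the reference patch maps to equal R,G,B
    (normalized to the green channel)."""
    r, g, b = white_patch_rgb
    return np.array([g / r, 1.0, g / b])

measured_white = np.array([180.0, 200.0, 160.0])  # assumed raw white-patch R,G,B
gains = white_balance_gains(measured_white)
print(measured_white * gains)  # all three channels are now equal
```

As the text notes, equalizing one color this way says nothing about the accuracy of the rest of the gamut; colors far from the reference can still be rendered with a visible cast.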
For any particular color component, the von Kries Hypothesis can only represent linear relationships between input and output. Assuming the same data representation on input and output (e.g. 8, 10 or 12 bits per component), unless k is 1.0, either part of the output dynamic range goes unused or some input values map outside the output range and must be clipped/clamped. Instead of multipliers, you can represent any input/output mapping function using small, component-based lookup tables. This way you can address sensor/display nonlinearity and gamma correction in one block. In an FPGA image-processing pipeline implementation, you can use the Xilinx Gamma Correction IP block to perform this operation.
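The lookup-table approach can be sketched as below: a per-channel gain and a gamma curve are folded into a single 256-entry table for 8-bit data, so that correction at runtime is one table read per component. The gain and gamma values are assumed for illustration and are not taken from the Xilinx IP.

```python
import numpy as np

def build_lut(gain, gamma, bits=8):
    """One LUT entry per input code: apply gain, clamp, then gamma-correct."""
    max_code = (1 << bits) - 1
    codes = np.arange(max_code + 1, dtype=float)
    scaled = np.clip(codes * gain, 0, max_code) / max_code  # gain + clamp to [0,1]
    out = (scaled ** (1.0 / gamma)) * max_code              # gamma correction
    return np.round(out).astype(np.uint16)

lut = build_lut(gain=1.2, gamma=2.2)

pixel_channel = 100            # raw 8-bit code for one color component
print(lut[pixel_channel])      # corrected value from a single table lookup
```

Because the table enumerates every input code, any monotonic or non-monotonic curve fits in the same hardware; changing the response is a table reload, not a logic change.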