There's always something new to discover. In the case of conventional LCD displays, for example, each pixel is formed from three RGB (red, green, blue) sub-pixels. A really cunning alternative is to use RGBW (red, green, blue, and white) sub-pixels as shown in the illustration below.
In the case of the PenTile technology from those clever folks at Clairvoyante (www.clairvoyante.com), each of these RGBW sub-pixels is 50 percent larger than its traditional counterpart. This means that you get only two columns of RGBW pixels for every four columns of traditional RGB pixels, which might appear to be something of a disadvantage.
Amazingly enough, however, Clairvoyante's special sub-pixel-based rendering algorithms result in bright, crisp images with natural color that have the same perceived resolution as with conventional displays. Furthermore, these RGBW displays provide twice the brightness of conventional displays for the same amount of power, or the same brightness while consuming half the amount of power. (You can read more about this in the Alternative Sub-Pixel Technology topic in my paper on the Origin and Evolution of Computer Displays.)
So there are definitely interesting things happening at the display end of the process, but what about the sensor end? Well, the sensors in the vast majority of today's digital cameras are based on what is known as the Bayer pattern, which is formed from red, green, and blue pixels and is named after Kodak inventor Bryce Bayer. As illustrated below, this pattern has twice as many green pixels as red or blue pixels (each 4 × 4 sub-array comprises 4 red pixels, 8 green pixels, and 4 blue pixels).
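To make those 4 × 4 counts concrete, here's a minimal Python sketch that generates a Bayer filter grid by tiling a 2 × 2 pattern. (The G-R/B-G orientation used here is one common variant; real sensors may start the tile on a different corner.)

```python
# Generate a Bayer colour-filter grid of any size by tiling a 2x2 pattern.
# Note: the G/R/B/G orientation shown here is one common variant; actual
# sensors may use a rotated version of the same tile.

def bayer_pattern(rows, cols):
    """Return a rows x cols grid of 'R', 'G', or 'B' filter labels."""
    tile = [["G", "R"],
            ["B", "G"]]
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

grid = bayer_pattern(4, 4)
flat = [p for row in grid for p in row]
print(flat.count("R"), flat.count("G"), flat.count("B"))  # 4 8 4
```

Whichever corner the tile starts on, every 4 × 4 sub-array ends up with the same 4 red, 8 green, 4 blue split.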
In this case, the brightness of the image at any given point is inferred primarily from the green pixels, while color information is derived from all of the pixels. Software then reconstructs a full-color image by combining the red, green, and blue contributions from each pixel with the brightness information provided by the green pixels.
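As a rough sketch of how such a reconstruction ("demosaicing") can work, here's a deliberately naive routine in plain Python. Real camera pipelines use far more sophisticated, edge-aware interpolation; the simple average-of-neighbours approach below is only for illustration.

```python
# Naive demosaic: each sensor site records only one colour sample, so the two
# missing channels at each site are estimated by averaging the same-colour
# neighbours in the surrounding 3x3 window. Real pipelines are edge-aware.

def bayer_pattern(rows, cols):
    tile = [["G", "R"], ["B", "G"]]  # one common Bayer orientation
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

def demosaic(raw, pattern):
    rows, cols = len(raw), len(raw[0])
    image = []
    for r in range(rows):
        row = []
        for c in range(cols):
            pixel = {}
            for ch in "RGB":
                if pattern[r][c] == ch:
                    pixel[ch] = raw[r][c]  # measured directly at this site
                else:
                    vals = [raw[rr][cc]
                            for rr in range(max(0, r - 1), min(rows, r + 2))
                            for cc in range(max(0, c - 1), min(cols, c + 2))
                            if pattern[rr][cc] == ch]
                    pixel[ch] = sum(vals) / len(vals)  # average neighbours
            row.append((pixel["R"], pixel["G"], pixel["B"]))
        image.append(row)
    return image

# Sanity check: a uniformly grey scene should reconstruct to uniform grey.
raw = [[128] * 4 for _ in range(4)]
rgb = demosaic(raw, bayer_pattern(4, 4))
print(rgb[0][0])
```

Because every 2 × 2 Bayer tile contains all three colours, each 3 × 3 window is guaranteed to hold at least one sample of every channel, so the averaging step never runs dry.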
Well, my friend Wilfried in the Netherlands just pointed me at a Hot Off The Press Article describing how the clever guys and gals at Eastman Kodak have developed an incredibly sensitive sensor based on – you guessed it – red, green, blue, and white pixels.
When we say "white" pixels, what we actually mean is that these pixels – which account for half of the pixels forming the sensor – are "panchromatic," meaning they are sensitive to light across the entire visible spectrum (each 4 × 4 sub-array comprises 2 red pixels, 4 green pixels, 2 blue pixels, and 8 "white" pixels).
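For illustration, here's one plausible way to lay out such a pattern in Python. Note that this particular tile is my own guess at an arrangement satisfying the stated 2:4:2:8 ratio – the article doesn't spell out Kodak's exact filter geometry, which may well differ.

```python
# Hypothetical RGBW colour-filter tile: the panchromatic ("W") sites form a
# checkerboard (half of all sites), while the remaining sites keep the Bayer
# 1:2:1 red:green:blue ratio. This reproduces the stated 2R/4G/2B/8W counts
# per 4x4 sub-array, but Kodak's actual layout may differ.

def rgbw_pattern(rows, cols):
    tile = [["W", "G", "W", "R"],
            ["G", "W", "R", "W"],
            ["W", "B", "W", "G"],
            ["B", "W", "G", "W"]]
    return [[tile[r % 4][c % 4] for c in range(cols)] for r in range(rows)]

flat = [p for row in rgbw_pattern(4, 4) for p in row]
print({ch: flat.count(ch) for ch in "RGBW"})  # {'R': 2, 'G': 4, 'B': 2, 'W': 8}
```

Placing the unfiltered sites on a checkerboard means every coloured site has panchromatic neighbours on all four sides, which is handy when the software borrows brightness information from the "white" channel.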
The result is an extremely sensitive sensor that can be used to capture images in low-light conditions with less noise. Alternatively, images can be shot at higher shutter speeds resulting in less camera shake and less blurred representations of moving objects (example images are provided in the article).
Well, this is all very interesting, but now I'm left wondering one thing. As with the Bayer pattern, this new sensor has twice as many green pixels as red or blue pixels. I'm guessing that this is because (as discussed in my paper on Color Vision) the human eye is more sensitive to green than to the other colors, but I'm not sure. If this is the case, then couldn't the software simply boost the green component while reconstructing the image from equal numbers of red, green, and blue pixels? Hmmm, this is something I shall be pondering further. . .
Questions? Comments? Feel free to email me – Clive "Max" Maxfield – at email@example.com. And, of course, if you haven't already done so, don't forget to Sign Up for our weekly Programmable Logic DesignLine Newsletter.