Most of us are familiar with the concept that we can mix red, green, and blue light to achieve a variety of colors as illustrated in Figure 1. Based on this, we all happily accept the idea that each picture element (pixel) on a TV or computer display is formed from red, green, and blue (RGB) sub-pixels. What most of us fail to question, however, is the quality of the resulting image. Just how good is it? Could it be better?
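For those who like to see things in code, additive mixing can be sketched in a few lines. This is just an illustrative model (not anything from the display teams mentioned below): each light source is an (R, G, B) triple in the range 0.0 to 1.0, and mixing two lights simply sums their components, clamped at full intensity.

```python
# Illustrative sketch of additive color mixing: each light is an
# (R, G, B) triple in 0.0..1.0; mixing sums the components, clamped
# to 1.0 because a display can't exceed full intensity.

def mix(a, b):
    """Additively mix two (R, G, B) light sources."""
    return tuple(min(1.0, x + y) for x, y in zip(a, b))

RED   = (1.0, 0.0, 0.0)
GREEN = (0.0, 1.0, 0.0)
BLUE  = (0.0, 0.0, 1.0)

print(mix(RED, GREEN))              # yellow: (1.0, 1.0, 0.0)
print(mix(mix(RED, GREEN), BLUE))   # white:  (1.0, 1.0, 1.0)
```

Red plus green gives yellow, and all three together give white, just as in Figure 1.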
Let's take a step back and consider the CIE 1931 color space chromaticity diagram as illustrated in Figure 2. I'm sure you are familiar with this ... it pops up everywhere ... but what does it actually mean? To be perfectly honest, I'm not sure about the underlying nitty-gritty details, but it certainly is pretty, and the fact that everyone is still using it says a lot (grin).
If we look at the Wikipedia entry for this, we discover that: "The outer curved boundary is the spectral (or monochromatic) locus, with wavelengths shown in nanometers." This still doesn't explain a lot, but at least we now have a clue as to what some of the numbers mean.
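One detail I can pass along with reasonable confidence: the x and y axes of the diagram are chromaticity coordinates, obtained from the CIE XYZ tristimulus values by normalizing out the overall brightness. A quick sketch (the tristimulus values below are the standard D65 "daylight" white point, normalized so Y = 1):

```python
# The x and y axes of the CIE 1931 diagram are chromaticity coordinates,
# derived from the XYZ tristimulus values by dividing out the total:
#   x = X / (X + Y + Z),  y = Y / (X + Y + Z)

def xy_chromaticity(X, Y, Z):
    """Project XYZ tristimulus values onto the (x, y) chromaticity plane."""
    total = X + Y + Z
    return X / total, Y / total

# D65 "daylight" white point (normalized so Y = 1.0):
x, y = xy_chromaticity(0.9505, 1.0000, 1.0890)
print(round(x, 4), round(y, 4))   # roughly 0.3127 0.329
```

That (0.31, 0.33)-ish point is the familiar white region near the middle of the diagram.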
The main point is that if we have red, green, and blue light sources, then by varying their relative intensities we can achieve any color that falls within the boundary line linking them. Based on this, it would seem to make sense to use the "reddest red", the "greenest green", and the "bluest blue" as our light sources, thereby enabling us to achieve the greatest possible gamut of colors as illustrated in Figure 3.
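The "any color inside the triangle" claim is really just a point-in-triangle test on the chromaticity plane. Here's a minimal sketch using barycentric coordinates; note that the primary (x, y) values below are the standard sRGB primaries, used purely as a familiar example, not the "reddest red" extremes discussed above.

```python
# Sketch: is a chromaticity point reproducible by three given primaries?
# Equivalent to asking whether the point lies inside the triangle whose
# vertices are the primaries' (x, y) chromaticities.

def in_gamut(p, a, b, c):
    """Return True if chromaticity p lies inside triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    w = 1.0 - u - v
    return u >= 0 and v >= 0 and w >= 0

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)   # sRGB primaries
print(in_gamut((0.3127, 0.3290), R, G, B))   # white point: True
print(in_gamut((0.08, 0.83), R, G, B))       # saturated spectral green: False
```

The white point sits comfortably inside, while a highly saturated green near the spectral locus does not, which is exactly the problem the rest of this article is about.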
Even if we could do this, it's obvious that a significant number of hues fall outside our triangle. In reality, however, the story is much worse than this. In the case of a typical liquid crystal display (LCD), for example, in which color filters are used to filter a white backlight, we end up with less-than-ideal red, green, and blue primary colors as illustrated in Figure 4.
In order to address this issue, several groups have been experimenting with the development of displays based on the use of six primary colors. For example, a team at NEC-Mitsubishi created proof-of-concept RRGGBB pixels formed from two different sets of red, green, and blue sub-pixels as illustrated in Figure 5.
Using this approach, the team says it can achieve a color gamut almost 170% greater than that of a standard RGB display. Of course, there are almost always different ways to do things; as an alternative approach, a team at Samsung opted to use red, green, blue, cyan, magenta, and yellow (RGBCMY) sub-pixels as illustrated in Figure 6.
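If you're curious how such "X% greater gamut" figures might be arrived at, one simple approach is to treat the primaries' (x, y) chromaticities as the vertices of a polygon and compare the enclosed areas using the shoelace formula. A sketch follows; be warned that the six-primary coordinates are made-up illustrative values, not the actual NEC-Mitsubishi or Samsung primaries.

```python
# Sketch of a gamut-area comparison: the gamut of N primaries is (to a
# first approximation) the polygon whose vertices are their (x, y)
# chromaticities, and the shoelace formula gives its area.

def gamut_area(vertices):
    """Shoelace formula: area of a polygon given (x, y) vertices in order."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

rgb = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]    # sRGB-like triangle
six = [(0.68, 0.32), (0.46, 0.53), (0.21, 0.72),    # hypothetical R, Y, G ...
       (0.10, 0.40), (0.15, 0.06), (0.35, 0.15)]    # ... C, B, M
print(gamut_area(six) / gamut_area(rgb))   # ratio > 1: a wider gamut
```

A hexagon hugging the spectral locus encloses noticeably more of the diagram than a triangle, which is the whole point of going to six primaries.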
Figure 1. Mixing red, green, and blue light.
Figure 2. The CIE 1931 color space chromaticity diagram.
Figure 3. Using the "reddest red", the "greenest green", and the "bluest blue".
Figure 4. Representative "real-world" red, green, and blue primaries for a typical LCD.
Figure 5. The gamut achieved using RRGGBB pixels.
Figure 6. The gamut achieved using RGBCMY pixels.
Now, before we all get excited, it should be noted that (a) to the best of my knowledge these systems are still at the "proof-of-concept" stage, and (b) the above diagrams were created by me to illustrate what I was waffling on about, so you can't blame anyone else for any inaccuracies they may contain.
The initial proof of concept for both of these technologies was in LCD displays, but the underlying six-primary-pixel concepts are applicable to all forms of display.
In the short term, these technologies may be too expensive for deployment in consumer products, but I'm sure they would be of interest for a range of scientific and military applications.
And when we consider the way in which costs fall over time, it may not be too long before we start to see displays of this type appearing in commercial and consumer applications.