In the beginning, Philo T. Farnsworth invented the television. One of his biggest problems – one that still exists today – was that he wanted to send more data than was possible given the space (bandwidth) available. So instead he came up with the first video compression scheme: interlaced video. Since 1921, the video that we have watched has been interlaced, and only recently have non-interlaced (progressive) formats become available. Despite the availability of progressive display technologies like plasma and LCD televisions, we continue to watch interlaced video source material, creating the need to de-interlace video.
Interlace was designed for the Cathode Ray Tube, or CRT. The TV screen is actually the end of this tube, which is painted on the inside with phosphor – a chemical that glows when it is hit by an electron beam. At the narrow end of the tube (the back of the TV), there is an ‘electron gun’ that sends a beam of electrons towards the screen. The electronics in the TV allow the gun to be aimed, which allows the entire front face of the tube to be ‘painted’ with the electron beam. This causes the front of the tube to glow, and makes the pictures we observe.
In order for the TV to paint the video picture, the image is broken down into a series of horizontal lines, which make up a single frame of video. One frame is a still picture on the TV. To generate a moving picture, the screen is continuously refreshed – usually about 60 times every second. To draw a higher-definition picture, more horizontal lines are added. It is not that simple, however, because television transmission systems only allow so many lines per second to be transmitted. If we send more lines per frame, then we cannot update the frames as often. And if the refresh rate is too slow, then we see a flickering image.
When the original analog television standards were developed, a careful trade-off was made between the number of lines per frame and the number of frames per second, given the available bandwidth (how much data can be sent per second). The original developers of television had very limited bandwidth, which forced a difficult choice between a picture with terrible resolution and one with a lot of flicker. So they decided to cheat. They developed a system for the U.S. where the TV first drew a low-resolution picture using every other line of the frame, and then went back and filled in the missing lines. The result was an image with an acceptable resolution (525 lines per frame) and a fast enough refresh rate (60 fields per second, or 30 complete frames per second) for the result to be of acceptable quality. This basic system, first introduced about 70 years ago in the U.S., is still what most people watch when they turn on the TV.
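The trade-off above is easy to check with some arithmetic. This short sketch works through the numbers from the paragraph – 525 lines per frame, two fields per frame, a 60 Hz refresh – to show what the transmission system actually has to carry (the exact NTSC field rate is 60/1.001 ≈ 59.94 Hz, but 60 is the round number used here):

```python
# Timing arithmetic for the 525-line interlaced system described above.
LINES_PER_FRAME = 525     # total horizontal lines in one complete picture
FIELDS_PER_FRAME = 2      # interlacing splits each frame into two fields
FIELD_RATE_HZ = 60        # the screen is refreshed 60 times per second

frame_rate = FIELD_RATE_HZ / FIELDS_PER_FRAME        # complete frames per second
lines_per_field = LINES_PER_FRAME / FIELDS_PER_FRAME # lines drawn per refresh
line_rate = LINES_PER_FRAME * frame_rate             # lines the channel must carry per second

print(f"{frame_rate:.0f} frames/s, {lines_per_field} lines/field, {line_rate:.0f} lines/s")
```

The key point: the channel carries the same 15,750 lines per second either way, but interlacing lets those lines refresh the screen 60 times a second instead of 30.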
In the figure above, the television’s electron gun first draws the red lines. The dotted lines show the path that the CRT beam takes when it ends one line and goes back to the beginning of the next line (known as horizontal retrace). Once all the red lines have been drawn, the CRT beam goes back up to the top, along the black dotted line path, and draws in the green lines. The black dotted line is known as the vertical retrace. Each half of the total frame – the red half and the green half – is known as a field. The two fields are said to be interlaced.
Interlacing takes advantage of the persistence of vision in our visual system. When a line is drawn on the TV screen for a very short time, we continue to see it, even after it has actually faded. A sequence of images, 60 of them every second, appears to us to be continuous. By alternating the odd and even lines, we get twice the vertical resolution for the available bandwidth, and we avoid visible flickering of the image. The video image is scanned onto the CRT at the same time as it is received from the broadcaster, which means that no memory is required in the system.