Most video sources, including DVD, standard-definition TV, and 1080i high-definition TV, transmit interlaced images. Instead of transmitting each video frame in its entirety (called progressive scan), they transmit only half of each frame at a time. The same applies to recording: video cameras and film-transfer devices likewise capture only half of each frame at a time.
The words "interlaced" and "progressive" arise from the days of CRT or "picture-tube" televisions, which form the image of each frame on the screen by scanning an electron beam horizontally across the picture tube, starting at the top and working its way down to the bottom. Each horizontal line "drawn" by the beam includes the part of the picture that falls within the space occupied by that line. If the scanning is interlaced, the electron beam starts by drawing every other line (all the odd-numbered lines) for each frame; this set of lines is called the odd field. Then it resets back to the top of the screen and fills in the missing information, drawing all the even-numbered lines, which are collectively called the even field. Together, the odd and even fields form one complete frame of the video image.
Because all CRT televisions worked in this manner until very recently, the signal transmitted to them was designed to send the odd lines followed by the even lines. This matched the capabilities of the display device, and it also cut in half the amount of information that had to be sent in a given amount of time; in other words, it reduced the transmission bandwidth by a factor of two, which was good news for broadcasters. (Some modern CRT televisions are capable of scanning each complete frame from top to bottom in a single pass, and thus are said to perform progressive scanning.)
Today, high-definition video displays use digital technologies that dominate the television landscape: DLP, LCD, LCOS (including the variants SXRD and D-ILA), and plasma. Instead of "drawing" lines of picture information on the screen, these technologies form images with an array of pixels, and each frame is displayed in its entirety all at once; all the pixels are activated simultaneously rather than line by line as in a CRT's scan.
Even so, the video signal that determines what these devices will display is still interlaced or progressive; that is, the information is sent from the source either half a frame or one complete frame at a time. In fact, digital displays ultimately require a progressive signal to operate properly, so if they receive an interlaced signal, it must be converted to progressive before it can be displayed.
Thus, translating the interlaced video signal from DVD and 1080i sources into progressive format is required by all digital displays. This is the job of a video processor, and the process itself is called de-interlacing. Video processors are found in all digital displays as well as many DVD players and other source devices.
If the objects in the video image are not moving, de-interlacing is easy: the two fields can simply be woven together to form a complete frame. However, when the recording itself is interlaced, the two fields that make up a complete frame are not captured at the same time. Each frame is recorded as an odd field from one point in time, followed by an even field captured 1/50th or 1/60th of a second later.
So, if an object in the video has moved in that fraction of a second, simply combining the fields produces errors in the image known as “combing” or “feathering” artifacts.
An example of feathering or combing
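The weave step described above can be sketched in a few lines of Python. The function name and the tiny field data are illustrative only; real fields are full video lines, not five-pixel rows.

```python
def weave(odd_field, even_field):
    """Interleave two fields into one full frame.

    Here the odd field supplies lines 0, 2, 4... and the even field
    lines 1, 3, 5... (0-indexed; naming conventions vary).
    """
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

# A bright bar sits at column 2 when the odd field is captured, but the
# object has moved to column 4 by the time the even field is captured
# a fraction of a second later.
odd_field  = [[0, 0, 9, 0, 0], [0, 0, 9, 0, 0]]
even_field = [[0, 0, 0, 0, 9], [0, 0, 0, 0, 9]]

frame = weave(odd_field, even_field)
# Alternate lines now show the bar in two different positions, producing
# the tooth-like "combing" pattern.
```

Weaving is lossless for static content; the artifact appears only because the two fields sample different moments in time.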
Simplest Competitor Approach (Non-Motion Adaptive):
The simplest approach to avoiding these artifacts is to ignore the even fields; this is called a non-motion-adaptive approach. When the two fields of a frame reach the processor, the data from the even field is discarded entirely.
The video-processing circuitry then recreates, or “interpolates,” the missing lines by averaging the pixels above and below. While this produces no combing artifacts, image quality is compromised because half of the detail and resolution has been thrown away.
1-One of the two fields is discarded, and we zoom in on a section of the frame.
2-The missing lines are then recreated by averaging the lines above and below, resulting in poor-quality interpolation.
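The interpolation step can be sketched as follows; the averaging of the lines above and below matches the description above, while the names and data are illustrative.

```python
def bob(field):
    """Rebuild a full frame from one field ("bob" de-interlacing):
    each missing line is the average of the field lines above and below;
    the bottom edge simply repeats the last line."""
    frame = []
    for i, line in enumerate(field):
        frame.append(line)
        if i + 1 < len(field):
            below = field[i + 1]
            frame.append([(a + b) / 2 for a, b in zip(line, below)])
        else:
            frame.append(list(line))  # no line below the last one
    return frame

field = [[10, 10, 10],
         [30, 30, 30]]
print(bob(field))  # the synthesized middle line is [20.0, 20.0, 20.0]
```

The recreated line is only an estimate: any vertical detail finer than two lines is gone, which is exactly the halved resolution described above.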
More-advanced techniques have been adopted by virtually all standard-definition video processors, but this basic approach is still sometimes used for high-definition signals, due to the increased computational and data-rate requirements of higher video resolution.
With video processors from some competitors, only 540 lines from a 1080i source are used to create the image that makes it to the screen. This is true even for video processors from companies that may have been considered providers of flagship performance in the standard-definition era.
Advanced Competitor Approach (Frame-based Motion Adaptive):
More advanced de-interlacing techniques available from the competition include a frame-based, motion-adaptive algorithm. By default, these video processors use the same technique described above. However, by using a simple motion calculation, the video processor can determine when no movement has occurred in the entire picture.
If nothing in the image is moving, the processor combines the two fields directly. With this method, still images can have the complete 1080 lines of vertical resolution, but as soon as there is any motion, half of the data is discarded and the resolution drops to 540 lines. So, while static test patterns look sharp, video does not.
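The frame-based decision amounts to a single global test, as this sketch shows. The helper names are invented, and the fallback here uses simple line repetition for brevity rather than averaging.

```python
def weave(odd, even):
    """Interleave two fields into one frame (full resolution)."""
    frame = []
    for o, e in zip(odd, even):
        frame += [o, e]
    return frame

def bob(field):
    """Half-resolution fallback: repeat each field line (for brevity;
    real processors interpolate the missing lines instead)."""
    frame = []
    for line in field:
        frame += [line, list(line)]
    return frame

def frame_adaptive(odd, even, prev_odd, threshold=0):
    """Weave only if *nothing* in the whole picture moved, judged by
    comparing the current odd field with the previous odd field."""
    moved = any(abs(a - b) > threshold
                for cur, prev in zip(odd, prev_odd)
                for a, b in zip(cur, prev))
    return bob(odd) if moved else weave(odd, even)

odd, even = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
# Static scene: full-resolution weave.
assert frame_adaptive(odd, even, prev_odd=odd) == [[1, 2], [5, 6], [3, 4], [7, 8]]
# A single changed pixel anywhere drops the whole frame to half resolution.
assert frame_adaptive(odd, even, prev_odd=[[1, 2], [3, 9]]) == [[1, 2], [1, 2], [3, 4], [3, 4]]
```

The all-or-nothing test is why static test patterns look sharp while anything in motion does not.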
Frame-based motion-adaptive techniques are now common in standard-definition video processors. However, they remain rare in high-definition video processors, because even frame-level motion detection is computationally demanding at high-definition resolutions.
Silicon Optix HQV Approach (Pixel-Based Motion Adaptive):
HQV processing represents the most advanced de-interlacing technique available: a true pixel-based motion-adaptive approach. With HQV processing, motion is identified at the pixel level rather than the frame level. While it is mathematically impossible to avoid discarding pixels in motion during de-interlacing, HQV processing is careful to discard only the pixels that would cause combing artifacts. Everything else is displayed with full resolution.
Only the pixels that would cause combing are removed.
Pixel-based motion-adaptive de-interlacing avoids artifacts in moving objects and preserves full resolution of non-moving portions of the screen even if neighboring pixels are in motion.
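A per-pixel decision can be sketched as below. This is a deliberately simplified two-field motion test (comparing the current even field with the previous even field); as discussed later in the article, a true implementation needs four fields. All names and data are illustrative, not HQV's actual data path.

```python
def pixel_adaptive(odd, even, prev_even, threshold=0):
    """For each pixel of each missing line: keep the even-field pixel if it
    is static (full detail), or interpolate vertically if it moved."""
    frame = []
    for i, odd_line in enumerate(odd):
        frame.append(odd_line)
        below = odd[i + 1] if i + 1 < len(odd) else odd_line
        new_line = []
        for x, pixel in enumerate(even[i]):
            if abs(pixel - prev_even[i][x]) <= threshold:
                new_line.append(pixel)                         # static pixel: weave
            else:
                new_line.append((odd_line[x] + below[x]) / 2)  # moving pixel: interpolate
        frame.append(new_line)
    return frame

# Only the pixel in the last column moved; every other pixel keeps
# its full-resolution even-field value.
odd       = [[10, 10, 10], [30, 30, 30]]
even      = [[20, 20, 99], [40, 40, 40]]
prev_even = [[20, 20,  0], [40, 40, 40]]
print(pixel_adaptive(odd, even, prev_even))
# [[10, 10, 10], [20, 20, 20.0], [30, 30, 30], [40, 40, 40]]
```

Only the single moving pixel is interpolated; neighboring static pixels on the same line keep their full-resolution values.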
“Second Stage” Diagonal Interpolation
To recover some of the detail lost in the areas in motion, HQV processing implements a multi-direction diagonal filter that reconstructs some of the lost data at the edges of moving objects, filtering out any “jaggies.” This operation is called “second-stage” diagonal interpolation because it is performed after the de-interlacing, which is the first stage of processing. Since diagonal interpolation is independent of the de-interlacing process, competitors have used similar algorithms with their frame-based de-interlacing approaches.
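One classic form of diagonal interpolation is the edge-directed line average, sketched below as a stand-in for HQV's proprietary multi-direction filter: for each missing pixel, the lines above and below are compared along three directions, and the pixel is averaged along whichever direction agrees best, so diagonal edges stay smooth instead of stair-stepping.

```python
def diagonal_interpolate(above, below):
    """Interpolate one missing line between two known lines by averaging
    along whichever direction (diagonal-left, vertical, diagonal-right)
    shows the smallest difference between the lines above and below."""
    out = []
    n = len(above)
    for x in range(n):
        # (difference, candidate value) pairs; min() picks the best direction.
        candidates = [(abs(above[x] - below[x]), (above[x] + below[x]) / 2)]
        if 0 < x < n - 1:
            candidates.append((abs(above[x - 1] - below[x + 1]),
                               (above[x - 1] + below[x + 1]) / 2))  # down-right
            candidates.append((abs(above[x + 1] - below[x - 1]),
                               (above[x + 1] + below[x - 1]) / 2))  # down-left
        out.append(min(candidates)[1])
    return out

# A diagonal edge: plain vertical averaging would smear it into
# half-intensity staircase pixels; directional averaging follows the edge.
above = [0, 0, 0, 9]
below = [0, 9, 9, 9]
print(diagonal_interpolate(above, below))  # [0.0, 0.0, 9.0, 9.0]
```

With plain vertical averaging the same input would yield half-intensity values at the edge, which is exactly the "jaggy" this second stage removes.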
Truth in Marketing
Silicon Optix is not the only company to implement pixel-based motion-adaptive de-interlacing, and it is important to recognize that not all such de-interlacing is identical. To implement a true per-pixel motion-adaptive de-interlacer, the video processor must perform a four-field analysis: in addition to the two fields that make up the current frame, the two previous fields are required to determine which pixels are in motion. If a competing de-interlacer does not evaluate four fields, it simply does not have the data necessary for true per-pixel motion-adaptive analysis. At the same time, some competing products implement only region-based analysis, in which motion is determined by evaluating larger blocks of the image rather than complete frames or individual pixels. A claim of “four-field” analysis alone therefore does not imply per-pixel motion-adaptive de-interlacing.
HQV Processing continues to analyze at the per-pixel level using four-field analysis even in high-definition.
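The need for four fields follows from parity: motion at a pixel can only be judged against the most recent field of the same parity, which lies two fields back in time. A minimal sketch of building such motion maps (the field layout and names are assumptions, not Silicon Optix's actual data path):

```python
def motion_maps(even_prev, odd_prev, even_cur, odd_cur, threshold=0):
    """Build per-pixel motion maps from four successive fields.
    Each parity is compared with the previous field of the SAME parity,
    which is why two fields of history are needed beyond the two
    currently being de-interlaced."""
    def moved(cur, prev):
        return [[abs(a - b) > threshold for a, b in zip(lc, lp)]
                for lc, lp in zip(cur, prev)]
    return moved(odd_cur, odd_prev), moved(even_cur, even_prev)

# One pixel of the current even field changed relative to the previous
# even field; the odd field is unchanged.
odd_map, even_map = motion_maps(
    even_prev=[[5, 5]], odd_prev=[[1, 1]],
    even_cur=[[5, 7]], odd_cur=[[1, 1]])
print(odd_map, even_map)  # [[False, False]] [[False, True]]
```

With only the current two fields there is no same-parity reference at all, so a two-field design cannot produce these per-pixel maps; that is the data requirement described above.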
NEXT: Video Cadencing and Video/Film Detection, Noise Reduction, Detail Enhancement, 1024-tap Scaling