Doug Graham and Morris
What do people mean when they talk about “interlaced video”, and why is it important? Interlaced video is a technology that was developed in the early days of television, to fit a watchable picture into the available bandwidth of the time. It’s still with us, even in these days of high definition video, and understanding and dealing with its limitations is a frequent task for anyone interested in using video images, especially when moving them to different media, such as print.
Video images are produced by a narrow beam of electrons hitting a phosphorescent coating on the back of your CRT’s screen. (There are actually three colors of coating, and three electron guns, to produce the color image, but let’s simplify and just think of one gun…an old-style black and white TV).
If left to itself, the beam of electrons would make a little bright spot in the middle of your picture tube…not very useful. So instead, the beam is electronically steered from left to right across the tube, starting at the top. This turns our bright dot into a bright line. If we vary the strength of the beam as we scan across, our line can be brighter in one place, dimmer or nonexistent at another along its length. With me so far?
OK, now we whip the beam back to the left, turning it off while we do so. The technical term for this is horizontal blanking. We move it down a tiny smidge and repeat the process. In this way, we build up a picture that consists of a big stack of narrow horizontal lines. At the bottom, we turn the beam off and whip it back to the top and start over again. The technical term for this is vertical blanking.
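That raster-scan loop can be sketched in a few lines of Python (the function name and the grid-of-intensities model are our own simplification, not how any real display works internally):

```python
# Toy progressive scan: the "beam" visits every pixel left to right,
# top to bottom, with blanking between lines and between frames.
def progressive_scan(image):
    screen = []
    for row in image:              # one left-to-right pass = one scan line
        line = []
        for intensity in row:      # vary beam strength along the line
            line.append(intensity)
        screen.append(line)        # horizontal blanking: retrace to the left
    return screen                  # vertical blanking: retrace to the top

# A 4-line "picture": brightest in the middle of each line
image = [[0, 3, 9, 3], [0, 5, 9, 5], [0, 5, 9, 5], [0, 3, 9, 3]]
assert progressive_scan(image) == image
```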
What we’ve just described is a progressively scanned picture, with each scanning line immediately below the preceding one. The only problem with this is that with traditional TV, by the time you’ve finished the bottom line, the top lines are fading away. The phosphors, you see, only glow for a little while after being excited by the electrons. We could scan faster, but the bandwidth of standard TV won’t let us get away with that.
So instead, we scan line 1, skip a line, scan line 3, skip a line, etc. Then we whip back to line 2, line 4, etc., filling in the gaps. Each set of lines is a video field, and takes 1/60 of a second to create (1/50 for PAL). Two fields, interlaced (there’s that word) together make a frame, and there are just shy of 30 of ’em per second (exactly 25 for PAL).
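Splitting a frame’s scan lines into two fields and weaving them back together can be sketched like so (the function names here are our own, and lines are numbered from 1 as in the text):

```python
# Split a frame's scan lines into two interlaced fields, then weave
# them back into a full frame. Odd field = lines 1, 3, 5, ...;
# even field = lines 2, 4, 6, ...
def split_fields(frame):
    odd = frame[0::2]    # lines 1, 3, 5, ...
    even = frame[1::2]   # lines 2, 4, 6, ...
    return odd, even

def weave(odd, even):
    frame = []
    for o, e in zip(odd, even):   # alternate lines from the two fields
        frame.extend([o, e])
    return frame

frame = ["line1", "line2", "line3", "line4"]
odd, even = split_fields(frame)
assert odd == ["line1", "line3"]
assert even == ["line2", "line4"]
assert weave(odd, even) == frame
```

As long as nothing in the picture moves between the two fields, weaving them reproduces the original frame exactly; the trouble described next starts when something does move.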
Now, interlaced video helps to smooth out motion, especially slow motion, since we’re refreshing the picture (half of it, anyway) 60 times per second instead of 30. But let’s say that there’s motion in your image. The odd field will show the image in one location, and the even field will show it 1/60 of a second later, at a slightly different position. This results in an image that’s full of jaggies…a sort of horizontal “hairiness” to the edges of moving objects. If you don’t have a sample image handy, grab a deck of cards. Cut the cards and shuffle them together, but don’t even up the edges. Look at the deck end-on. That’s what a still made from interlaced video can look like.
What does this mean when you go to make a still image from video, for example to use on a DVD cover or a brochure? Let’s take an example image: a vertical finger moving from left to right. Let’s use only 4 scan lines to make it simple. Line 1 has the fingernail info.
In this example, the even scan lines are recorded first, so parts of the finger are recorded on lines 2 and 4 as shown (even field). Then 1/60th of a second later, lines 1 and 3 are recorded (odd field). (Which field comes first actually depends on the video format, but the effect is the same either way.) Because the finger was moving toward the right and the odd field was recorded 1/60th of a second later, parts of it are offset to the right as shown above. This is the problem with interlaced video, and this is how a still will be captured from the video.
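The moving-finger capture can be simulated with a few lines of Python (the names and the 8-pixel-wide “frame” are our own toy setup):

```python
# Toy model of the moving finger: a one-pixel-wide vertical bar drawn on
# 4 scan lines. The even field (lines 2 and 4) is grabbed first; 1/60 s
# later the finger has moved one pixel right and the odd field (lines 1
# and 3) is grabbed. Weaving the fields into one still shows the comb.
WIDTH = 8

def draw_line(x, mark="#"):
    return "." * x + mark + "." * (WIDTH - x - 1)

x_even, x_odd = 2, 3                  # finger moved right between fields
still = [
    draw_line(x_odd, "N"),            # line 1: fingernail (odd field, later)
    draw_line(x_even),                # line 2: even field (earlier)
    draw_line(x_odd),                 # line 3: odd field (later)
    draw_line(x_even),                # line 4: even field (earlier)
]
print("\n".join(still))
# Adjacent lines alternate between the two positions: the "deck of cards" edge.
```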
If the still is now displayed on a computer monitor, there will be a vibrating effect where it appears the finger is giving the “no, no, no” motion. If you print it, you’ll get the “deck of cards” effect we described above. To get rid of this effect, the still is brought into Photoshop and the deinterlace filter is applied. The deinterlace filter gives the user the choice of using only the even field or only the odd field.
Say the even field is selected. The deinterlace filter initially makes the still look like this:
Then it “invents” what it thinks lines 1 and 3 should look like by using the information on lines 2 and 4. With a vertical finger and only 4 scan lines, the fingernail on line 1 is missing but Photoshop does not know that so it creates line 1 using info from line 2. Line 1 appears to just be an extension of the finger with no fingernail. Line 3 is created using info from lines 2 and 4. The interpolated result looks like this:
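A minimal sketch of this keep-one-field-and-interpolate step, assuming even-field selection and simple neighbor averaging (real deinterlacers are smarter, and the function name is our own):

```python
# Keep the even field (lines 2, 4, ...) and rebuild the discarded odd
# lines: line 1 is copied from line 2, line 3 is the average of lines
# 2 and 4, and so on. Lines are lists of pixel intensities.
def deinterlace_even(frame):
    even = frame[1::2]                # keep lines 2, 4, ...
    out = []
    for i, line in enumerate(even):
        if i == 0:
            out.append(line[:])       # invented line 1: copy of line 2
        else:
            prev = even[i - 1]        # invented line: average of neighbors
            out.append([(a + b) // 2 for a, b in zip(prev, line)])
        out.append(line)              # the kept even-field line itself
    return out

# 4-line finger: line 1 carries the fingernail (intensity 9)
frame = [
    [0, 9, 0],   # line 1: fingernail
    [0, 5, 0],   # line 2: finger
    [0, 5, 0],   # line 3: finger
    [0, 5, 0],   # line 4: finger
]
result = deinterlace_even(frame)
# The nail (the 9) is gone: every rebuilt line is invented from finger lines.
assert result == [[0, 5, 0], [0, 5, 0], [0, 5, 0], [0, 5, 0]]
```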
Notice the fingernail is gone. This is a really crude representation since there are 480 scanlines to use, instead of our 4, but we think you get the idea. No matter how many scan lines are in the original video, half of all the information will always be lost when deinterlacing. With an HDTV format like 1080i, the problem is substantially lessened with more scan lines. In addition, some deinterlacing algorithms are smarter than others. But the interpolated result can never have as much “true” information as stills taken from progressive video where every scan line is used in the still with no deinterlacing required.
Speaking of TV formats: the three most popular formats (besides standard-definition TV, sometimes referred to as “480i”) are 480p (the “p” means progressive scanning), sometimes called extended-definition TV or EDTV; 720p; and 1080i, with the latter two considered variants of HDTV.
Good article. How do you know whether to select even or odd scan lines with the Photoshop deinterlacing filter, though? By experimenting to see which looks better? (Why would one look better than the other, anyway?) Or is there a rule?