By Adam Wilt
Editor’s Note: Special thanks to Adam for his work on this article. Also see Adam’s excellent article on DV Formats. For more articles by Adam and for information on his Engineering and Production Services, please visit Adam Wilt’s Web Site. To contact Adam by email, go to the bottom of this page.
Slowly coming together… Random but useful tidbits on working with DV formats, proper Character Generation and Titling, and Betacam gotchas. More to come on an irregular schedule, including shooting for HDTV upconversion and tape-to-film transfer… also images illustrating the CG stuff.
- 28 April 98: DV stuff, Betacam
- 8 July 98: Character Generators and Titling
- 28 July 98: Color bars on the Canon XL-1; DSR-30 unlocked audio corrected.
- DVCAM tapes can be used in DV recorders. I used DVCAM tapes in my DHR-1000 when I couldn’t get DV tapes longer than 60 minutes in my neck of the woods. To suss out what tape you need, take the DVCAM running time and multiply by 1.5 (since DV uses a 10 micron track pitch to DVCAM’s 15 microns). For example, a PDV-94ME runs for about 141 minutes when recording DV.
- Recording DV unlocked audio via FireWire with the DSR-30 DVCAM VTR: press and hold down RECORD and PAUSE while turning on the power. The deck will emit a sustained beep, at which point you can release the buttons. It will now accept unlocked audio. It’ll still record 15-micron DVCAM tracks, but the audio will be unlocked. This trick is also supposed to work with the DSR-200 camcorder. Note: the DSR-20 will record unlocked audio without needing this trick.
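The running-time conversion above follows directly from the two track pitches. A minimal Python sketch (the function name is mine; the pitches and the PDV-94ME example are from the text above):

```python
# DV records with a 10-micron track pitch vs. DVCAM's 15 microns,
# so the same tape lasts 1.5x as long when recording DV.
DVCAM_PITCH_UM = 15
DV_PITCH_UM = 10

def dv_runtime(dvcam_minutes):
    """Approximate DV running time for a tape rated in DVCAM minutes."""
    return dvcam_minutes * DVCAM_PITCH_UM / DV_PITCH_UM

# A PDV-94ME (94 minutes in DVCAM) yields about 141 minutes of DV.
print(dv_runtime(94))  # 141.0
```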
- Color bars on the DCR-VX1000 and DSR-200: Press and hold PHOTO and the START/STOP red button while turning the camera section on. You’ll get full-field bars which you can record, along with any incoming microphone audio. You’ll have to turn the camera off to get rid of them.
- Color bars on the Canon XL-1: Turn on the camera to the fully auto mode (green square). Then press both the shutter speed buttons simultaneously for a few seconds, and full-field bars will appear. Pressing both buttons again for another few seconds turns them off (a definite improvement over the Sony).
- Playing DV or DVCAM on DVCPRO VTRs: First, make sure your machine is up-to-date: VTRs made before June 1997 need an EPROM upgrade to play DVCAM. Check the serial number: it’s of the form MYxxxxxxx, where M is a month letter, A-L, and Y is the last digit in the year. F7xxxxxxx means the machine was built in June 1997, and it’s OK. H6xxxxxxx would mean the machine was born in August of 1996 and the EPROM upgrade is required. Second — and this is very important — use the setup menus to specify DV or DVCAM before you insert the tape! The playback mode “locks in” when the tape is inserted, so if you set DV or DVCAM mode after loading the tape playback will still be attempted as if the tape were a DVCPRO tape, and you’ll get really crappy results.
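The serial-number check lends itself to a tiny decoder. A hedged sketch: it assumes all serials in question fall in the 1990s (a single year digit is ambiguous across decades), and `needs_eprom_upgrade` is my name for it, not Panasonic’s:

```python
# First character is a month letter A-L, second is the last digit of the
# year; machines built before June 1997 ("F7") need the EPROM upgrade.
def needs_eprom_upgrade(serial):
    month = ord(serial[0].upper()) - ord('A') + 1  # A=1 (Jan) .. L=12 (Dec)
    year = 1990 + int(serial[1])                   # assumes a 1990s build
    return (year, month) < (1997, 6)               # built before June 1997?

print(needs_eprom_upgrade("F7xxxxxxx"))  # False - June 1997, OK as-is
print(needs_eprom_upgrade("H6xxxxxxx"))  # True  - August 1996, needs upgrade
```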
- Line inputs on the VX1000 (or other consumer cams)? You can fake it: first you’ll need a 3.5mm dual-mono-to-single-stereo Y Adapter, Radio Shack Cat. No. 274-375B; this’ll give you right and left channel mic inputs on separate mono jacks. Use an Attenuating Audio Cable with an RCA plug on the line end and a 3.5mm miniplug on the mic side: Radio Shack Cat. No. 42-2461A. Sony also makes a stereo pair version of this; if you just need one, peel the two cables apart. You can now jack into line-level audio sources. With a bit of luck, you can even plug a line source into one side, and a mono mic into the other — and the levels might even match up… If you follow this route, be sure to gaffer tape the cables out of the way so that they don’t get pulled or wiggled during the shoot, and avoid touching or moving the $%#@&! delicate miniplugs. Also, only use battery power when recording via these jacks lest any residual power-supply ripple or ground loops cause intolerable AC hum problems.
- Using monophonic mics on cameras like the VX1000: the mic input on the VX1000 and most similar cameras is a standard 3.5mm stereo minijack. If you plug in a mono mic, you’ll only get one channel of audio. Use a mono-to-stereo adapter available from Radio Shack to fix this: part # 274-374. Or, use a 3.5mm dual-mono-to-single-stereo Y Adapter, Radio Shack Cat. No. 274-375B, to give you separate mono right and left channel inputs for two mics.
- NEVER stuff a Betacam tape into a UVW-series VTR without checking to see if there’s already a tape inserted. The tape will go in just about all the way on top of the existing tape; it’ll then jam in place. UVWs do nothing to prevent this! Fortunately, unless you’ve really rammed it in enthusiastically, you can usually pry it out, and usually without any permanent damage occurring…
Character Generators & Titling: I spent almost four years writing software for the Abekas A72 in the early ’90s; I learned a lot about what works and what doesn’t when putting text on a video screen.
Doing good CG in interlaced video is trickier than you might think. Doing it right in compressed interlaced video (such as DV) is even harder. With interlace, you can have problems of line twitter on static or moving text, as well as roll-induced “crawlies” and distortions. With compression, you add problems of codec pathologies. And there are always pitfalls associated with colors and bandwidth.
Line twitter occurs when you have fine, single-line detail that appears in one field but not in the other. The detail so rendered will flicker at the frame rate, while the rest of the image updates at the field rate. The frame rate (29.97 Hz in 525-line NTSC; 25 Hz in 625-line PAL) is slow enough that your eye notices the flicker, and it’s very annoying.
First, make sure that the fonts you use don’t have fine, single-line horizontal strokes that will show up as line twitter. Look at heavier fonts in the same family: instead of light or roman, look at bold, extrabold, or black variants. But bear in mind that instead of using Coronet or Script Light, you might have to switch to Helvetica or Times Bold. Often I find that the delicate typefaces I prefer in print work just won’t translate properly to video, and I have to find something else instead.
If you’re dead-set on using a fine, light typeface, try building a font that thickens up the strokes: use outline, soften, glow, or extrusion (solid drop shadow) effects in a similar color or brightness. Sometimes a soft drop shadow in a similar color with a minimal offset can be used — even Premiere 4.2’s limited text capabilities will give you that much!
The problem with interlace and rolls is that as the text moves up the screen, its position with respect to each field’s line structure can either change, or stay the same. If it stays the same, your text will look as good moving as it does when it’s still. If it changes, the text will lose resolution and may flicker, distort, and crawl around as it rolls.
Imagine a character in a screen font with a height of 10 lines. When the text is placed on a page at its starting position, the even scanlines in the character all fall on the even video field, while the odd scanlines fall on the odd field. As the text sits there, the even fields show lines 2, 4, 6, 8, and 10 of the text, while the odd fields show lines 1, 3, 5, 7, and 9. All ten scanlines in the text are seen over the course of any two fields.
Now start a roll. Any decent CG updates the roll on a field basis; after one field is displayed, the text is moved up a certain amount for the next field, up the same amount for the next field, and so on.
Let’s say that the director wants a nice, slow roll, to kill some time. You’ve selected 60 lines/second (CGs that allow you to set the roll rate usually use scanlines per second as the measure, and in NTSC-land the 59.94 Hz field rate is rounded to 60 Hz to keep operators from getting bogged down in fractional math. If you’re in PAL-land, assume you’re rolling at 50 lines/second for this example), and pushed the “go” key on the CG.
Now in the first even field, character scanlines 2, 4, 6, 8, and 10 are shown. Next the text is moved up one scanline because at a nominal field rate of 60 Hz, one scanline per field results in 60 lines per second. So when the odd field is displayed, the text, being up one line from its even-field position, has lines 2, 4, 6, 8, and 10 displayed again — and lines 1, 3, 5, 7, and 9 don’t get put onscreen! The next vertical interval comes along, and the text is moved up one line again, so that the even scanlines once again appear in the even field. The next vertical interval arrives, and up goes the text again — one line! — so that the next odd field, like all the even and odd fields before it, shows the even scanlines of the text. The odd scanlines in the text never appear onscreen!
The result is half-vertical-resolution text that looks awful. Thin horizontal strokes will either appear about twice as thick as they should, or twice as thin, depending on your luck (sometimes you get the evens, sometimes the odd scanlines, depending on the CG you’re using, the initial position of the text, and the field timing when you press the “go” key).
Now go back and double the roll speed, to 120 lines/second (or 100 lines/second if using a 625/50 CG). Now, the first even field shows the even scanlines. Come the vertical interval, the text is moved up two lines (two lines per field times 60 fields per second gives 120 lines per second); the relative positioning of the text with respect to the field structure remains the same, since the even scanlines are moved up two lines to the next higher even scanline, while the odds are moved up to the next higher odd line. When the odd field is displayed, the odd scanlines in the text are shown, just as they should be! The full vertical resolution of the text remains.
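The two cases above can be played out in a toy simulation: track which text-line parities ever reach the screen at one line per field (60 lines/second) versus two lines per field (120 lines/second). This is an illustrative model of my own, not any CG’s actual code:

```python
# Each field displays only screen lines of its own parity; a 10-line
# character starts aligned so its even lines fall on the even field.
def parities_shown(lines_per_field, num_fields=8):
    shown = set()
    offset = 0
    for field in range(num_fields):
        field_parity = field % 2              # even field, odd field, ...
        for text_line in range(1, 11):        # a 10-scanline character
            # text line n sits on screen line n - offset; it is visible
            # only when that screen line's parity matches the field
            if (text_line - offset) % 2 == field_parity:
                shown.add(text_line % 2)      # record which parity appeared
        offset += lines_per_field             # roll: move text up each field
    return shown

print(parities_shown(1))  # one parity only: half vertical resolution
print(parities_shown(2))  # both parities shown: full resolution
```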
You may have noticed a pattern here. When the roll rate (in lines/second) was the same as the field rate (in fields/second), the roll looked awful. When the roll rate was twice the field rate, things looked fine. As it turns out, this relationship holds for all integer multiples: roll rates that are odd multiples of the field rate look awful. Even multiples look great. Thus for 525/59.94 (or 525/60, more or less) video, good roll rates are 120, 240, 360, and the like. In 625/50, the good ones are 100, 200, 300, 400, and so on.
Unfortunately, in 525/59.94 the only two decent rates that are slow enough to be read are 120 and 240 (and the latter only on a good day!). 625/50 video is better — not only are the roll rates about 20% slower, there are almost 20% more active scanlines in a frame, so in 625 you can roll at 100, 200, and 300 lines/second without straining any eyeballs.
What about roll rates that aren’t integer multiples of the field rate? As you might guess, as the text moves, its positional relationship to the field structure no longer follows an integral structure, but changes on a field-by-field basis. This leads to two things:
1) Unless the CG you are using offers sub-pixel positioning, the roll won’t be able to execute smoothly, and the text will stutter or judder up the screen.
2) The roll motion will “beat” with the field structure, as the scanlines themselves appear to roll through the text (at a rate proportional to the difference between the roll rate and the nearest integral multiple of the field rate), causing time-dependent rippling distortions of the text (the “crawlies”) that look really horrible.
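The “good rate” rule boils down to one test: is the roll rate an even integer multiple of the field rate? A minimal sketch (the function name and the rounded 60/50 Hz field rates are my own simplification):

```python
FIELD_RATE_NTSC = 60  # nominal; actual rate is 59.94 Hz
FIELD_RATE_PAL = 50

def is_clean_roll(lines_per_sec, field_rate):
    """A roll looks clean only at even integer multiples of the field rate."""
    multiple = lines_per_sec / field_rate
    return multiple == int(multiple) and int(multiple) % 2 == 0

# Odd multiples (60, 180, 300) halve resolution; non-multiples crawl.
print([r for r in range(60, 361, 60) if is_clean_roll(r, FIELD_RATE_NTSC)])
# [120, 240, 360]
print([r for r in range(50, 401, 50) if is_clean_roll(r, FIELD_RATE_PAL)])
# [100, 200, 300, 400]
```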
What real CGs do to avoid this is to offer only the “good” rates by default; extra work is required to set arbitrary roll rates. When you select timed rolls (the total time is set, rather than the speed), better CGs will fiddle things to wind up with a good rate. Some may just pick the rate that comes closest to meeting the desired time; some Chyrons run part of the roll at one rate, then “shift gears” to finish at a different rate to get the desired total duration; the A72 (and probably the Texus) “adaptively spaces” the text in the roll, stretching or compressing the vertical line spacing to allow a good roll rate to be used and still meet the time target.
What crummy CGs offer in the way of speed control is none at all. For example in Premiere 5.0, where they’ve finally understood that folks want to roll credits, there are no tools for setting or even reporting roll rates in the titler: one just has to adjust things by trial and error. This stinks: if you’re stuck with such a CG and need to produce for interlaced video, complain loudly to the vendor about their lousy non-video-aware tools, and then go look for a CG that does things right (are there any available for NLE? I’d check out Image North’s Inscriber CG and Pinnacle’s TypeDeko, both on the PC; and McRobert’s Comet CG on the Mac. These may offer proper controls… and there may be others out there as well. Let me know at the address below what you come up with and how well it works — or doesn’t.)
Codec Pathologies: Most titlers in nonlinear editors render nice, sharp text. That’s just fine for display on a computer screen, but it’s too sharp for either bandwidth-limited broadcast or for compression: DV, M-JPEG, MPEG, or the like. This is true whether or not the text is antialiased; even antialiased text can have sharp vertical, horizontal, and diagonal transitions with significant energy in spatial frequency bands outside the codec’s comfort range.
Overly sharp text stresses the codec; in DCT-based codecs this results in “mosquito noise”, “critters”, or “feathering” artifacts that cause visual noise scattered around the immediate vicinity of the text. Moreover, the character of this noise varies depending on the relationship of the stressing image to the DCT block boundaries, which means that as your text moves (in a roll, crawl, or scroll) the mosquito noise surrounding the text will “fly around” just like a flock of hungry mosquitos. It’s very annoying.
The trick is to prefilter the text so as to avoid these problems. Running a simple soften or blur filter over the text will make it look a lot worse on the computer screen — but it will actually improve its appearance going through the codec. Not a lot of softening is necessary, but a little bit almost always helps.
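To see why a touch of blur helps, consider a one-dimensional slice of pixels across a character edge. The [1, 2, 1]/4 kernel below is a generic softening filter of my own choosing, not any particular titler’s; the point is how much it reduces the per-sample step the codec has to encode:

```python
# Apply a simple [1, 2, 1]/4 smoothing kernel along a row of samples.
def soften(samples):
    out = list(samples)
    for i in range(1, len(samples) - 1):
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1]) / 4
    return out

edge = [0, 0, 0, 100, 100, 100]  # a razor-sharp text edge
soft = soften(edge)              # -> [0, 0, 25.0, 75.0, 100.0, 100]

largest_step = lambda s: max(abs(a - b) for a, b in zip(s, s[1:]))
print(largest_step(edge))  # 100: full-amplitude jump in one sample
print(largest_step(soft))  # 50.0: same edge, half the per-sample step
```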
Bandwidth — or more precisely, an excess thereof — has been a CG problem since day one. Many CGs (even high-end broadcast CGs) simply render text, antialiased or not, into their frame buffers, then encode the results as a video signal and send it out. Unfortunately, the sharp horizontal luminance transitions across characters can contain significant energy at frequencies above the passbands for NTSC or PAL systems (or even the circuits in unencoded, component analog VTRs and terminal equipment), which causes ringing and overshoot in the signal (I’ve measured 20% overshoot on the outputs of some top-end CGs).
For example, if you set the peak white in your text at 80 IRE (for NTSC, or 80% in PAL), the overshoot will hit almost 100 IRE or 100%. Many CGs offer a default white value in their palette of 80 IRE, because anything brighter causes an overshoot beyond 100 IRE or 100%: this results in “sync buzz”, that annoying 60 Hz (50 Hz for PAL; does PAL transmission have sync buzz problems?) buzz you’ll get in over-the-air audio when overmodulated picture carrier bleeds into the sound subcarrier. [Amusing note: the hardware folks who designed the Abekas A72 were very careful to bandwidth-limit the video outputs to avoid this ringing and overshoot. As a result, you can use a peak white of 100 on the A72 with no problems. Of course, we got lots of complaints from New York producers who really liked that hard-edged, ringing video quality they got out of their Chyrons; you can’t satisfy everyone…]
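The ringing and overshoot can be demonstrated numerically: band-limit a hard 0-to-80 luminance step with a sharp-cutoff (truncated-sinc) low-pass filter and the output rings past 80, and dips below 0 on the dark side. The filter here is a generic illustration of hard band-limiting, not a model of any particular encoder:

```python
import math

def sinc_lowpass(samples, cutoff=0.25, taps=21):
    """Convolve samples with a truncated-sinc low-pass kernel (DC gain 1)."""
    half = taps // 2
    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    kernel = [2 * cutoff * sinc(2 * cutoff * (n - half)) for n in range(taps)]
    gain = sum(kernel)
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - half, 0), len(samples) - 1)  # clamp at ends
            acc += samples[j] * w
        out.append(acc / gain)
    return out

step = [0.0] * 40 + [80.0] * 40       # text white set at "80 IRE"
filtered = sinc_lowpass(step)
print(max(filtered) > 80.0)           # True: the band-limited edge overshoots
print(min(filtered) < 0.0)            # True: and undershoots on the dark side
```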
Color choices are also important when designing titles. Sure, just like you I’d love to use a deep, saturated red — but it usually doesn’t work out in practice.
First, the chroma bandwidth of all of our current production formats is at best 1/2 the luma bandwidth (in 4:2:2 digital video) and often only about 1/4 as good (4:1:1 or 4:2:0 DV; BetaSP, MII) or even worse (3/4″, S-VHS, Hi8, VHS, Video8). Since most text is comprised of fine detail, there’s not enough area for the color to fully express itself: what you’ll mostly get are noticeable color bleeds of the text onto the background and vice versa.
Second, any time you encode the video as NTSC or PAL for composite display or for analog transmission (which as of mid-1998 is how most of us are seeing TV, Digital Satellite Systems and DVDs aside [and even then many folks are connecting DSS and DVD via a single-wire composite feed]), you’ll get cross-color and cross-luminance artifacts at sharp transitions, especially when trying to use brightly-colored text. Often the dot crawl artifacts seen on such text will render it unreadable. What makes this worse is picking a text color that’s at the opposite side of the vectorscope from the background: for a real howler, try red text atop a cyan background or magenta text on a green ground, and view it on a composite monitor: ouch!
What can you do? Sad as it might seem, you have to back off the saturation and go for more muted pastel tones. Also, picking colors in the same wedge of the vectorscope (yellow on red, or cyan on blue, for example) causes fewer out-of-band excursions of the color subcarrier, and minimizes dot-crawl artifacts.
It’s critical to view your rendered text on a real, honest-to-goodness composite video monitor; the computer monitor (or even an RGB video monitor) showing a full-bandwidth uncorrupted RGB signal is no indication of what the text really looks like once it’s been put on video.
The one seeming exception I’ve found to these overall guidelines is rendering yellow text atop blue or cyan backgrounds (at least in NTSC; I’ve not tried this with PAL encoding). This violates all the rules: I’ll use fairly saturated colors for both text and ground, and they’re opposite one another on the vectorscope. For whatever reason, this doesn’t seem to cause as many problems as other, similarly pathological combinations. It could be that the yellow/blue change vector is fairly close to the B-Y axis, where humans’ spatial color sensitivity is near a minimum; it could be that the Y signal excursion is not as great, so less cross-color appears; it could just be a personal preference and associated aesthetic blindness!
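The B-Y conjecture is easy to test on paper. Using the standard luma weights (Y = 0.299R + 0.587G + 0.114B), the yellow-to-blue change vector lands almost entirely on the B-Y axis; a back-of-the-envelope sketch (the helper name is mine):

```python
# Compute (Y, B-Y, R-Y) from normalized RGB using the standard luma weights.
def yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, b - y, r - y

y_yel, u_yel, v_yel = yuv(1, 1, 0)      # saturated yellow
y_blu, u_blu, v_blu = yuv(0, 0, 1)      # saturated blue
du, dv = u_blu - u_yel, v_blu - v_yel   # change vector on the vectorscope
print(round(du, 3), round(dv, 3))       # the B-Y term dwarfs the R-Y term
```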
Copyright (c) 1998, 1999 by Adam J. Wilt.
You are granted a nonexclusive right to duplicate, print, link to, frame, or otherwise repurpose this material, as long as all authorship, ownership and copyright information is preserved.
Contact Adam via email
WARNING: YOU CAN’T JUST HIT “SEND”; IT WON’T WORK! IT’S NOT ALL FILLED IN AUTOMATICALLY! My email is “adam at adamwilt dot com”, but you’ll have to type the “adam” yourself. This is necessary to avoid having spammers’ webcrawlers snuffle my address and send me unwanted junk. Last updated 8 July 1999.