A Guide to Standard and High-Definition Digital Video Measurements
3G, Dual Link and ANC Data Information

Primer




www.tektronix.com/video_audio

Table of Contents

In The Beginning
  Traditional television
The “New” Digital Television
  Numbers describing an analog world
  Component digital video
Moving Forward from Analog to Digital
  The RGB component signal
  Gamma correction
  Gamma correction is more than correction for CRT response
  Conversion of R'G'B' into luma and color difference
The Digital Video Interface
  601 sampling
  The parallel digital interface
  The serial digital interface (SDI)
  High-definition video builds on standard definition principles
Timing and Synchronization
  Analog video timing
  Horizontal timing
  Vertical timing
  Analog high-definition component video parameters
Digital Studio Scanning Formats
  Segmented frame production formats
Digital Studio Synchronization and Timing
  Telecine synchronization
Dual link and 3G
Digital Audio
  Embedded audio in component digital video
  Extended embedded audio
  Systemizing AES/EBU audio
  Basic HD embedded audio
  Audio control packet
  How to monitor multi-channel audio
  Audio channels in 5.1 surround sound
  The surround sound display
Ancillary data
Video Measurements
  Monitoring and measuring tools
  Monitoring digital and analog signals
  Assessment of video signal degradation
  Video amplitude
  Signal amplitude
  Frequency response
  Group delay
  Non-linear effects
  Differential gain
  Differential phase
Digital System Testing
  Stress testing
  Cable-length stress testing
  SDI check field
  In-service testing
  Eye-pattern testing
  Jitter testing
  SDI status display
  Cable-length measurements
  Timing between video sources
  Intrachannel timing of component signals
  Waveform method
  Timing using the Tektronix Lightning display
  Bowtie method
Operating a Digital Television System
  RGB and color-difference waveforms
  Component gain balance
  The vector display
  The Lightning display
  The Diamond display
  The Arrowhead display
How to Monitor Gamut
  Definition of Limits
  Conclusion
Appendix A – Color and Colorimetry
  White
  Red, green, and blue components
  Gamut, legal, valid
  Format conversion tables
Appendix B – Television Clock Relationships
Appendix C – Standard Definition Analog Composite Video Parameters
Appendix D – Reference Standards and Practices for Television
Appendix E – Bibliography
Appendix F – Glossary
Acknowledgements
About the authors
Disclaimer


In The Beginning

It is tempting to think of digital television as something very scientific and even complex. But when we view the end result, we find something very familiar; something television engineers have sought since the very beginning… an experience that just keeps getting better and better… quality video and audio conveying the artist’s performance to the viewing audience. The only thing new in digital television is the way the message gets from here to there.

Does it really matter how the message travels? The artist and the viewer (and in many countries, the advertiser) probably don’t care what path the signal takes. They can benefit from digital television’s improved performance without knowing the details. Ah, but the science… that’s where the fun comes in. Those of us involved in the technical side of television do care; and we do benefit from the significant advances in television science over the past 60+ years… and in particular the advances brought about by digital television over the past 20 years.

Program video, digital audio, and associated ancillary data signals together make up the digital television signal. In the analog world of television, video and audio can exist in totally separate paths from source to the home television receiver. Digital signals may be organized with much more freedom, with video, audio, and other signals working together as a stream of data. All we need to know is how the data is organized to pick out what we want.

Traditional television

We can call analog video and analog audio the elements of traditional television. But it is important to realize we are still trying to accomplish the traditional goals… and maybe more. Digital television builds on analog, and our understanding of digital television builds on what we already know about analog television. Light into the camera lens and sound into the microphone are still analog. Light from the display and sound to your ears are still analog phenomena.

We already know that analog video is a “sampling” of light values: values of brightness represented by a voltage, with additional information providing the color of the samples. The samples are synchronized through the transmission system to reproduce an image of the original scene on our display. Analog video travels as a “serial” stream of voltage values containing all of the “data” necessary to make a picture when the receiver knows what to do with the information. So you can see that by just substituting a few words, and by just doing a few things differently to take advantage of what we have learned over the past fifty years, we can understand that digital video is really not very different from analog video.

So if we start with analog light and end with analog light, why use digital video at all? In many cases, the camera sensor is still producing analog video, but it is now common to almost immediately convert the varying analog voltage representing the instantaneous value of video to digital for handling with essentially no degradation. In some cases, such as computer-generated video or graphics, the video will start out in digital format, and with the new digital television systems, it can reach the display without ever going to analog.

We can still send and receive television signals via analog NTSC, PAL, or SECAM transmissions, but we are already using digital transmissions to convey higher quality, more efficient television signals to the home. Digital television is an available part of everyday life. Some of us will use it and contribute to its improvement. Some of us will take advantage of it without needing to know the details.


The “New” Digital Television

Digital signals have been a part of television for many years, at first buried inside equipment such as test signal and character generators; later throughout entire systems. In this primer, we will deal first with the video portion of the television signal for simplicity. Audio will be digital as well, and will take its place in the digital data stream for recovery at the television receiver. Digital audio will be discussed in later chapters.

Digital video is a simple extension of analog video. Once we understand analog video, it is easy to understand how digital video is created, handled, processed, and converted to and from analog. Analog and digital video have many of the same constraints, and many of the problems that may occur in the digital domain are a result of incorrect analog source video. Therefore, it is important to have standards to reference for the design and operation of both analog and digital video devices.

Numbers describing an analog world

Early digital video was merely a digital description of the composite analog NTSC or PAL video signal. Standards were written to describe operating limits, specify the number data describing each voltage level, and define how each number was generated and recovered. Because of the high speed of the data, it was common to handle digital video data internally on an eight- or ten-bit bus, and initial standards described a multi-wire external connection as well. The standards also described certain ancillary and housekeeping data to synchronize the receiver and the transported data, and to permit additional services such as embedded audio. Later, as higher processing speeds became practical, a single-wire composite serial interface standard was developed. In its basic form, digital video is a numeric representation of analog voltage, with number data occurring fast enough to accommodate changing video and necessary ancillary data.
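As a small illustration of numbers describing an analog voltage, the sketch below maps an analog luma level to a digital code value. It assumes the 10-bit quantization levels used by the component digital standards discussed later in this primer (black at code 64, white at code 940); the function name is ours, not from any standard.

```python
def quantize_luma_10bit(millivolts):
    """Map an analog luma voltage (0 mV = black, 700 mV = white) to a
    10-bit code value, assuming BT.601-style quantization with black
    at code 64 and white at code 940."""
    code = round(64 + (millivolts / 700.0) * (940 - 64))
    # Clip to the valid video range; codes 0-3 and 1020-1023 are
    # reserved for timing reference signals.
    return max(4, min(1019, code))

print(quantize_luma_10bit(0))    # black  -> 64
print(quantize_luma_10bit(700))  # white  -> 940
print(quantize_luma_10bit(350))  # mid-gray -> 502
```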

Component digital video

The designers of early analog special effects equipment recognized the advantage of keeping the red, green, and blue video channels separate as much as possible during any processing. The NTSC and PAL encoding/decoding process is not transparent, and multiple generations of encoding and decoding progressively degrade the signal. The signal in the camera starts out with independent channels of red, green, and blue information, and it is best to handle these signals through the system with as few format generations as possible before encoding them into NTSC or PAL for transmission to the home. But handling three separate coordinated channels of information through the television plant presents logistic and reliability problems. From a practical standpoint, these three signals should all coexist on one wire, commonly a single coaxial cable. As it turns out, we can simply matrix these three components, the red, green, and blue video channels, to a more efficient set consisting of luma and two color-difference signals; digitize each of them; and multiplex the data onto a single coaxial cable. We can handle this data signal much as we do traditional NTSC or PAL composite video. Now we are handling a high-speed stream of numeric data. Although this data signal contains energy changing at a much faster rate than the 5 to 6 MHz energy in an NTSC or PAL video signal, it can be handled losslessly and with less maintenance over reasonable distances. Once the video signal is in the digital domain, we can easily extract its components for individual processing and recombine them again in the digital domain without any further loss or interaction among the channels.

Component and digital techniques contribute significant advantages in video quality control, and the speed of digital devices has made the bandwidth of high-definition video practical. Digital also lends itself to processing with various compression algorithms to reduce the total amount of data needed. It is now possible to convey high-definition video and associated multichannel audio in the bandwidth required for high-quality real-time analog video. The subject of video compression is covered in many publications (see Bibliography) and will not be addressed in this primer.


Moving Forward from Analog to Digital

The digital data stream can be easily broken down into its separate components, often serving the same function as their analog counterparts. We will continue with this analogy as we describe and compare the analog and digital video domains. Once we clearly understand the similarity between analog and digital video, we can move to HDTV, which is often a digital representation of the corresponding high-definition analog format.

NTSC and PAL video signals are composites of the three camera channels, the primary color components red, green, and blue, matrixed together to form a luminance channel summed with the modulation products of a suppressed subcarrier containing two channels of color information. A third system of single-channel composite transmission is the SECAM system, which uses a pair of frequency-modulated subcarriers to convey chroma information. In the studio, there is no specific requirement that the signal be NTSC, PAL, or SECAM at any point between the camera RGB pickup devices and the RGB channels of the final display device. While an understanding of NTSC, PAL, or SECAM is useful, we need not invest in any new study of composite video.

The RGB component signal

A video camera splits the light of the image into three primary colors – red, green, and blue. Sensors in the camera convert these individual monochrome images into separate electrical signals. Synchronization information is added to the signals to identify the left edge of the picture and the top of the picture. Information to synchronize the display with the camera may be added to the green channel, occasionally added to all three channels, or routed separately.

The simplest hookup, as shown in Figure 1, is direct R, G, and B out of the camera, into the picture monitor. The multi-wire transmission system is the same for analog standard or analog high-definition video. A multi-wire connection might be used in small, permanently configured sub-systems.

This method produces a high-quality image from camera to display, but carrying the signals as three separate channels requires the engineer to ensure each channel processes the signals with the same overall gain, direct current (dc) offset, time delay, and frequency response. A gain inequality or dc offset error between the channels will produce subtle changes in the color of the final display. The system could also suffer from timing errors, which could be produced by different lengths of cable or different methods of routing each signal from camera to display. This would produce timing offset between the channels, producing a softening or blurring in the picture – and in severe cases multiple, separated images. A difference in frequency response between channels would cause transient effects as the channels were recombined. Clearly, there is a need to handle the three channels as one.

Figure 1. RGB from the camera with direct connections to the monitor.


Insertion of an NTSC or PAL encoder and decoder (Figure 2) does nothing for simplicity except make the signal easier to handle on one wire within the television plant. System bandwidth is compromised in a friendly way to contain the energy of the three video signals in 4.2 MHz (NTSC) or 5.0 to 5.5 MHz (PAL). The single-wire configuration makes video routing easier, but frequency response and timing must be considered over longer paths. Because both chroma and luma in the NTSC or PAL composite signal share the same 4.2, 5.0, or 5.5 MHz bandwidth, multiple generations of encoding and decoding must be avoided.

By substituting component digital encoders and decoders, the hookup (Figure 3) is no more complex and is better in performance. Energy in the single coaxial cable is now at a data rate of 270 Mb/s for standard definition signals; 1.485 Gb/s or higher for high-definition signals. Standard definition signals could be converted to analog NTSC or PAL for transmission within traditional broadcast television channels. High-definition signals must be compressed for on-air transmission within the channel bandwidth of existing NTSC or PAL channels.

Gamma correction

An analog factor to be considered in the handling of the video signal is the perception that the video display is accurately reproducing the brightness of each element of the scene. The Cathode Ray Tube (CRT) display is an inherently non-linear device; the amount of light output is a non-linear function of the voltage applied to the display. This function is called the gamma of the device. In order to produce a linear response, a correction factor must be applied within the TV system. Therefore, the RGB signals in the camera are gamma-corrected with the inverse function of the CRT. Gamma-corrected signals are denoted R', G', and B'; the prime mark (') indicates a correction factor has been applied to compensate for the transfer characteristics of the pickup and display devices. Although the prime mark may appear a bit cumbersome, and is sometimes incorrectly omitted, it will be used throughout this primer for correlation with standards documents.

New LCD and plasma display technologies are becoming more prevalent today, so one would think that gamma correction would not be needed in the future. However, the human visual response to luminance is also a power function, approximately intensity raised to the 1/3 power. For best contrast representation and signal-to-noise ratio (S/N), video encoding uses this same power function. This is called perceptual coding.

Figure 3. Digital transmission avoids analog signal degradation.

Figure 2. Video encoded to NTSC or PAL for transmission on a single coaxial cable.


Figure 4. BT.709 gamma correction complements CRT display response.


Gamma correction is more than correction for CRT response

The gamma correction needed for the CRT is almost optimal for perceptual coding. For this reason, care should be taken when evaluating systems where correction factors have been applied within the devices for gamma correction.

Figure 4 shows the gamma correction as a power function of 0.45 as specified in ITU-R BT.709, a predominant standard for digital high-definition video. This gamma correction is applied at the camera to correct for nonlinearities at the CRT and provide perceptual coding. Nonlinearity in the CRT follows a power function with an exponent between 2.2 and 2.6, and most CRTs have a value of about 2.5. The resulting total system gamma is about 1.2, which is nearly ideal for typical viewing conditions. This response roughly corrects for human lightness perception, which in turn reduces the number of bits required when the video signal is digitized for transmission.
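The relationship between camera gamma correction and CRT response described above can be sketched with simple power functions (the actual BT.709 transfer function also includes a linear segment near black, which this toy model omits):

```python
# Toy model of the gamma chain: camera applies a 0.45 power function,
# a typical CRT responds with an exponent of about 2.5.
def camera_gamma(linear_light):
    return linear_light ** 0.45

def crt_response(signal_voltage):
    return signal_voltage ** 2.5

# End-to-end: displayed light vs. scene light for an 18% gray input.
# The net exponent is 0.45 * 2.5 = 1.125, close to the ~1.2 total
# system gamma described in the text.
scene = 0.18
displayed = crt_response(camera_gamma(scene))
print(round(displayed, 4))
```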

Conversion of R'G'B' into luma and color difference

Video components red, green, and blue are native to the camera pickup devices and are almost always used by operators in managing video color. RGB, however, is not the most bandwidth-efficient method of conveying the image during video processing, because all three components must be of equal bandwidth. Human vision is more sensitive to changes in luminance detail than to changes in color, so we can improve bandwidth efficiency by deriving full-bandwidth luma information and allotting any remaining available bandwidth to color-difference information.

Processing of the video signal components into luma and color-difference values reduces the amount of information that must be conveyed. By having one full-bandwidth luma channel (Y') represent the brightness and detail of the signal, the two color-difference channels (R'-Y' and B'-Y') can be limited to about half the luma channel bandwidth and still provide sufficient color information. This allows for a simple linear matrix to convert between R'G'B' and Y', R'-Y', B'-Y'. Bandwidth limiting of the color-difference channels is done after the matrix. When the channels are restored to R'G'B' for display, brightness detail is

Page 10: A Guide to Standard and High-Definition Digital Video ... to Standard HD Digital... · A Guide to Standard and High-Definition Digital Video Measurements ... of television, video

restored at full bandwidth and spatial color detail is limited in an acceptable manner. The following paragraphs and tables discuss the conversion process for R'G'B' to Y', R'-Y', B'-Y' that takes place within encoders and decoders.

Gamma-corrected R'G'B' components are matrixed to create gamma-corrected component luma, designated Y', and two color-difference components. The luma and color-difference components are derived from R', G', and B' using the equations shown in Table 1 (values in volts).

Table 1 shows the range of voltages for the conversion of R'G'B' to Y', (R'-Y'), (B'-Y'). The luma signal has a dynamic range of 0 to 700 mV. The color-difference signals, R'-Y' and B'-Y', may have different dynamic ranges dependent on the scaling factors for conversion to various component formats. The analog component format denoted by Y'P'bP'r is scaled so that both color-difference values have a dynamic range of ±350 mV. This allows for simpler processing of the video signals. Analog Y'P'bP'r values are offset to produce the Y'C'bC'r values typically used within the digital standards. The resulting video components are a Y' or luma channel similar to a monochrome video signal, and two color-difference channels, C'b and C'r, that convey chroma information with no brightness information, all suitably scaled for quantization into digital data.

A number of other color-difference formats are in use for various applications. In particular, it is important to know that the coefficients currently in use for composite PAL, SECAM, and NTSC encoding are different, as shown in Table 2.

Table 1. Luma and Chroma Video Components.

Y', R'-Y', B'-Y', commonly used for analog encoding:

1125/60/2:1, 720/60/1:1
  Y'    =  0.2126 R' + 0.7152 G' + 0.0722 B'
  R'-Y' =  0.7874 R' – 0.7152 G' – 0.0722 B'
  B'-Y' = –0.2126 R' – 0.7152 G' + 0.9278 B'

525/59.94/2:1, 625/50/2:1, 1250/50/2:1
  Y'    =  0.299 R' + 0.587 G' + 0.114 B'
  R'-Y' =  0.701 R' – 0.587 G' – 0.114 B'
  B'-Y' = –0.299 R' – 0.587 G' + 0.886 B'

Y', P'b, P'r analog component:

1125/60/2:1 (SMPTE 240M)
  Y'  = 0.212 R' + 0.701 G' + 0.087 B'
  P'b = (B'-Y') / 1.826
  P'r = (R'-Y') / 1.576

1920x1080 (SMPTE 274M), 1280x720 (SMPTE 296M)
  Y'  = 0.2126 R' + 0.7152 G' + 0.0722 B'
  P'b = [0.5 / (1 – 0.0722)] (B'-Y')
  P'r = [0.5 / (1 – 0.2126)] (R'-Y')

525/59.94/2:1, 625/50/2:1, 1250/50/2:1
  Y'  = 0.299 R' + 0.587 G' + 0.114 B'
  P'b = 0.564 (B'-Y')
  P'r = 0.713 (R'-Y')

Y', C'b, C'r, scaled and offset for digital quantization:

1920x1080 (SMPTE 274M), 1280x720 (SMPTE 296M)
  Y'  = 0.2126 R' + 0.7152 G' + 0.0722 B'
  C'b = 0.5389 (B'-Y') + 350 mV
  C'r = 0.6350 (R'-Y') + 350 mV

525/59.94/2:1, 625/50/2:1, 1250/50/2:1
  Y'  = 0.299 R' + 0.587 G' + 0.114 B'
  C'b = 0.564 (B'-Y') + 350 mV
  C'r = 0.713 (R'-Y') + 350 mV

Table 2. Luma and Chroma Values for Composite Video Encoding (SMPTE 170M and ITU-R BT.470-6, approximate values).

  Y        =  0.299 R' + 0.587 G' + 0.114 B'
  NTSC I   = –0.2680 (B' – Y') + 0.7358 (R' – Y')
  NTSC Q   = +0.4127 (B' – Y') + 0.4778 (R' – Y')
  PAL U    =  0.493 (B' – Y')
  PAL V    =  0.877 (R' – Y')
  SECAM Dr = –1.902 (R' – Y')
  SECAM Db =  1.505 (B' – Y')
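As an illustration of the standard definition conversion in Table 1, the sketch below converts normalized R'G'B' values to Y'P'bP'r in millivolts and confirms the dynamic ranges quoted in the text; the function name is ours, not from any standard.

```python
# Sketch of the SD (BT.601) conversion: R', G', B' normalized to
# the range 0..1, results expressed in millivolts.
def rgb_to_ypbpr_sd(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    pb = 0.564 * (b - y)   # scales B'-Y' into roughly +/-0.5
    pr = 0.713 * (r - y)   # scales R'-Y' into roughly +/-0.5
    return y * 700, pb * 700, pr * 700   # mV

# Extremes confirm the dynamic ranges quoted in the text:
print(rgb_to_ypbpr_sd(1, 1, 1))  # white: 700 mV luma, zero color difference
print(rgb_to_ypbpr_sd(0, 0, 1))  # blue: P'b reaches its +350 mV limit
```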


The Digital Video Interface

A quick overview of the digital interface connecting our analog world of video is appropriate at this point. The block diagrams in Figures 5 through 8 can help you understand how video production equipment handles digital component video signals. Although these block diagrams illustrate a standard definition system, the concept holds for high-definition formats. In high-definition formats, sampling and data rates will be faster, and separate 10-bit busses for luma and chroma may be maintained further through the system to minimize the amount of circuitry operating at high data rates.

Gamma-corrected RGB (Figure 5) is converted in a linear matrix to a luma component, Y', and two scaled chroma components, P'b and P'r. Since the eye is more sensitive to changes in brightness (detail) than to changes in hue, the Y' signal will be carried through the system at a higher bandwidth (5.5 MHz in standard definition). The luma and chroma signals are low-pass filtered to eliminate higher video frequencies that might cause aliasing in the sampling (digitizing) process. The filtered luma signal is sampled at a rate of 13.5 MHz in an analog-to-digital converter to produce a 10-bit data stream at 13.5 Mwords/s. The two chroma channels are filtered, then sampled at 6.75 MHz in analog-to-digital converters to produce two data streams at 6.75 Mwords/s each. The three video channels are multiplexed to a single 10-bit parallel data stream at 27 Mwords/s.
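The sampling and multiplexing arithmetic above can be sketched as follows; the Cb Y Cr Y interleaving order follows the co-sited 4:2:2 structure defined by BT.601/BT.656, and the helper names are ours:

```python
# 4:2:2 sampling: luma at 13.5 MHz, each color-difference channel
# at 6.75 MHz, all as 10-bit words.
luma_rate, chroma_rate, bits = 13.5e6, 6.75e6, 10

word_rate = luma_rate + 2 * chroma_rate   # 27 Mwords/s parallel
serial_rate = word_rate * bits            # 270 Mb/s once serialized
print(word_rate / 1e6, serial_rate / 1e6)  # 27.0 270.0

def multiplex(y, cb, cr):
    """Interleave co-sited samples into the Cb Y Cr Y transmission order."""
    out = []
    for i in range(0, len(y), 2):
        out += [cb[i // 2], y[i], cr[i // 2], y[i + 1]]
    return out

print(multiplex([1, 2, 3, 4], [10, 11], [20, 21]))
# [10, 1, 20, 2, 11, 3, 21, 4]
```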

Figure 5. Digitizing RGB camera video.


Figure 6. Processing and serializing the parallel data stream.

Figure 7. SDI Receiver – deserializes the video data to parallel.


A co-processor (Figure 6) is used to add timing reference signals, AES/EBU-formatted digital audio, and other ancillary data. A checksum is calculated for the data and added to the parallel data stream.

The 27 Mwords/s, 10-bit parallel data is then loaded into a shift register, or serializer, where it is clocked out at a 270 Mb/s rate and scrambled for efficient transmission compliant with, in this example, standard definition ITU-R BT.656/SMPTE 259M.
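A minimal sketch of the serializer's channel coding, assuming the scrambled-NRZI scheme SMPTE 259M specifies (a self-synchronizing scrambler with generator polynomial x^9 + x^4 + 1, followed by NRZI encoding); the function names and all-zeros register initialization are illustrative:

```python
def scramble_nrzi(bits):
    """Scramble a bit sequence with the x^9 + x^4 + 1 polynomial,
    then NRZI-encode it (toggle the line level on each 1)."""
    shift = [0] * 9          # scrambler shift register
    nrzi_level = 0
    out = []
    for b in bits:
        s = b ^ shift[8] ^ shift[3]   # feedback taps at x^9 and x^4
        shift = [s] + shift[:8]
        nrzi_level ^= s               # NRZI: transition on a 1
        out.append(nrzi_level)
    return out

def descramble_nrzi(levels):
    """Inverse operation, as performed in the deserializer."""
    shift = [0] * 9
    prev = 0
    out = []
    for lv in levels:
        s = lv ^ prev                 # NRZI decode: 1 means a transition
        prev = lv
        out.append(s ^ shift[8] ^ shift[3])
        shift = [s] + shift[:8]
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
assert descramble_nrzi(scramble_nrzi(data)) == data  # round trip
```

Because the scrambler is self-synchronizing, the receiver needs no separate synchronization of its descrambler state; after a few bits it tracks the transmitter automatically.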

Standard definition ITU-R BT.656/SMPTE 259M compliant signals can be carried by standard video cables up to about 300 meters (approximately 1,000 feet) with near 100% data integrity. High-definition SMPTE 292M compliant signals at a data rate of 1.485 Gb/s are limited to about 100 meters (approximately 300 feet).

At the receiver (Figure 7), energy at half-clock frequency is sensed to apply an appropriate analog equalization to the incoming 270 Mb/s data signal. A new 270 MHz clock is recovered from the NRZI (Non-Return-to-Zero Inverted) signal edges, and the equalized signal is sampled to determine its logic state. The


deserializer unscrambles the data using an algorithm complementary to the encoder's scrambling algorithm and outputs a 10-bit data stream at 27 Mwords/s. The embedded checksum is extracted by the receiver and compared with a new checksum produced from the received data; any error is reported and an appropriate flag added to the data stream. A co-processor extracts any audio or other ancillary data.

The 10-bit data is then demultiplexed (Figure 8) into digital luma and chroma data streams, converted to analog by three digital-to-analog converters, filtered to reconstruct the discrete data levels back into smooth analog waveforms, and matrixed back to the original R'G'B' for display.

This quick system overview will help us understand how the system operates. Additional details of the digital interface are provided in the paragraphs that follow.

601 sampling

ITU-R BT.601 is the sampling standard that evolved out of a joint SMPTE/EBU task force to determine the parameters for digital component video for the 625/50 and 525/60 television systems. This work culminated in a series of tests sponsored by SMPTE in 1981, and resulted in the well-known CCIR Recommendation 601 (now known as ITU-R BT.601). This document specifies the sampling mechanism to be used for both 525 and 625 line signals. It specifies orthogonal sampling at 13.5 MHz for analog luminance and 6.75 MHz for the two analog color-difference signals. The sample values are digital luma Y' and digital color-difference C'b and C'r, which are scaled versions of the analog gamma-corrected B'-Y' and R'-Y'. 13.5 MHz was selected as the sampling frequency because the sub-multiple 2.25 MHz is a factor common to both the 525 and 625 line systems (see Appendix B – Television Clock Relationships).
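These clock relationships can be verified with simple arithmetic. The sketch below (illustrative, not part of any standard) confirms that 13.5 MHz is six times the 2.25 MHz common factor, yields a whole number of samples per line in both the 525- and 625-line systems, and produces the 27 Mwords/s (270 Mb/s serial) rate described earlier:

```python
from fractions import Fraction

F_LUMA = 13_500_000          # Rec. 601 luma sample rate, Hz
F_CHROMA = F_LUMA // 2       # 6.75 MHz for each color-difference channel

# 2.25 MHz is the sub-multiple common to both scanning systems
assert F_LUMA == 6 * 2_250_000

# 525/59.94: line rate = 4.5 MHz / 286 ≈ 15734.27 Hz -> 858 samples per line
line_rate_525 = Fraction(4_500_000, 286)
assert Fraction(F_LUMA) / line_rate_525 == 858

# 625/50: line rate = 15625 Hz -> 864 samples per line
assert F_LUMA / 15_625 == 864

# One luma word plus two half-rate chroma words -> 27 Mwords/s parallel,
# or 270 Mb/s once serialized at 10 bits per word
words_per_second = F_LUMA + 2 * F_CHROMA
assert words_per_second == 27_000_000
assert words_per_second * 10 == 270_000_000
```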


Figure 8. Recovering analog R'G'B' from parallel data.


Figure 9. Color-difference quantizing.

                         Voltage     Decimal  Hex  10-bit Binary  8-bit Binary  Hex  Decimal
Excluded                 399.2 mV    1023     3FF  11 1111 1111   1111 1111     FF   255
                         396.9 mV    1020     3FC  11 1111 1100
Highest Quantized Level  396.1 mV    1019     3FB  11 1111 1011   1111 1110     FE   254
Positive                 350.0 mV    960      3C0  11 1100 0000   1111 0000     F0   240
Black                    0.0 mV      512      200  10 0000 0000   1000 0000     80   128
Negative                 -350.0 mV   64       040  00 0100 0000   0001 0000     10   16
Lowest Quantized Level   -396.9 mV   4        004  00 0000 0100   0000 0001     01   01
Excluded                 -397.7 mV   3        003  00 0000 0011
                         -400.0 mV   0        000  00 0000 0000   0000 0000     00   00

(Levels 000h–003h and 3FCh–3FFh are reserved values.)

Figure 10. Luminance quantizing.

                         Voltage     Decimal  Hex  10-bit Binary  8-bit Binary  Hex  Decimal
Excluded                 766.3 mV    1023     3FF  11 1111 1111   1111 1111     FF   255
                         763.9 mV    1020     3FC  11 1111 1100
Highest Quantized Level  763.1 mV    1019     3FB  11 1111 1011   1111 1110     FE   254
Peak                     700.0 mV    940      3AC  11 1010 1100   1110 1011     EB   235
Black                    0.0 mV      64       040  00 0100 0000   0001 0000     10   16
Lowest Quantized Level   -47.9 mV    4        004  00 0000 0100   0000 0001     01   01
Excluded                 -48.7 mV    3        003  00 0000 0011
                         -51.1 mV    0        000  00 0000 0000   0000 0000     00   00

(Levels 000h–003h and 3FCh–3FFh are reserved values.)

Although many current implementations of ITU-R BT.601 use 10-bit sampling, ITU-R BT.601 permits either 8-bit samples (corresponding to a range of 256 levels, 00h through FFh) or 10-bit samples (corresponding to a range of 1024 levels, 000h through 3FFh). Specified 8-bit word values may be directly converted to 10-bit values, and 10-bit values may be rounded to 8-bit values for interoperability. Color-difference C'b and C'r component values in the range 040h to 3C0h (Figure 9) correspond to analog signals between ±350 mV. Signal excursions are allowed outside this range, and the total available range is nominally ±400 mV. Luma component values Y' (Figure 10) in the range 040h to 3ACh correspond to analog signals between 0.0 mV and 700 mV. Signal excursions are again allowed outside this range, with a total range of nominally –48 mV to +763 mV to allow greater headroom for overload above the white level. A/D converters are configured to never generate 10-bit levels 000h through 003h and 3FCh through 3FFh, to permit interoperability with 8-bit systems. Quantizing levels are selected so 8-bit levels with two "0"s added will have the same values as 10-bit levels. In both luminance and color-difference A/Ds, values 000h to 003h and 3FCh to 3FFh are reserved for synchronizing purposes.
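The level mappings of Figures 9 and 10 can be expressed as simple scaling functions. This sketch is illustrative (the function names are ours, not from any standard); it maps analog millivolts to 10-bit codes, clips to the lowest and highest quantized levels, and shows the 10-bit to 8-bit rounding:

```python
def luma_code(mv):
    """10-bit luma code: 0 mV -> 64 (040h), 700 mV -> 940 (3ACh)."""
    code = round(64 + mv * (940 - 64) / 700.0)
    return max(4, min(1019, code))   # clip to lowest/highest quantized levels

def chroma_code(mv):
    """10-bit color-difference code: -350 mV -> 64, 0 mV -> 512, +350 mV -> 960."""
    code = round(512 + mv * (960 - 512) / 350.0)
    return max(4, min(1019, code))

def ten_to_eight(code10):
    """Round a 10-bit code to 8 bits; an 8-bit code is the 10-bit code / 4."""
    return min(254, max(1, (code10 + 2) >> 2))

assert luma_code(700.0) == 940 and luma_code(0.0) == 64
assert chroma_code(350.0) == 960 and chroma_code(-350.0) == 64
assert chroma_code(0.0) == 512
assert ten_to_eight(940) == 235 and ten_to_eight(64) == 16
```

Note how the clipping keeps every generated code inside the 004h–3FBh range, so the reserved synchronizing values can never occur in picture data.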


Figure 11. Digital horizontal blanking interval.

Figure 11 shows the location of samples and digital words with respect to an analog horizontal line, and Figure 12 shows the spatial relationship to the picture area. Because the timing information is carried by End of Active Video (EAV) and Start of Active Video (SAV) packets, there is no need for conventional synchronizing signals. The horizontal blanking interval and the entire line periods during the vertical blanking interval can be used to carry audio or other ancillary data. The EAV and SAV timing packets are identified in the data stream by a header starting with the words 3FFh, 000h, 000h. The fourth word (xyz) in the EAV and SAV packets contains information about the signal. Ancillary data packets in component digital video are identified by a header starting with the words 000h, 3FFh, 3FFh.

The “xyz” word is a 10-bit word with the two least significant bits set to zero to survive an 8-bit signal path. Contained within the standard definition “xyz” word are functions F, V, and H, which have the following values:

Bit 8 – (F-bit) 0 for field one and 1 for field two

Bit 7 – (V-bit) 1 in vertical blanking interval; 0 during active video lines

Bit 6 – (H-bit) 1 indicates the EAV sequence; 0 indicates the SAV sequence
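The F, V, and H bits above can be assembled into a complete xyz word. The sketch below assumes the ITU-R BT.656 conventions not spelled out in the text: bit 9 is always 1, and bits 5–2 carry error-protection bits derived from F, V, and H:

```python
def xyz_word(f, v, h):
    """Build the SD 'xyz' word from the F, V, and H flags (ITU-R BT.656)."""
    p3 = v ^ h              # protection bits, per BT.656
    p2 = f ^ h
    p1 = f ^ v
    p0 = f ^ v ^ h
    return (1 << 9) | (f << 8) | (v << 7) | (h << 6) \
         | (p3 << 5) | (p2 << 4) | (p1 << 3) | (p0 << 2)   # bits 1-0 stay zero

# EAV (H=1) on an active line of field one: the familiar value 274h
assert xyz_word(0, 0, 1) == 0x274
# SAV (H=0) on an active line of field one: 200h
assert xyz_word(0, 0, 0) == 0x200
# EAV during field-one vertical blanking: 2D8h
assert xyz_word(0, 1, 1) == 0x2D8
```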

The parallel digital interface

Electrical interfaces for the data produced by Rec. 601 sampling were standardized separately by SMPTE as SMPTE standard 125M for 525/59.94 and by EBU Tech. 3267 for 625/50 formats. Both of these were adopted by CCIR (now ITU) and included in Recommendation 656, the document describing the parallel hardware interface. The parallel interface uses eleven twisted pairs and 25-pin “D” connectors. The parallel interface multiplexes data words in the sequence C'b, Y', C'r, Y'… resulting in a data rate of 27 Mwords/s. Timing sequences SAV and EAV were added to each line. The digital active video line for both 525 and 625 formats includes 720 luma samples, with the remaining data samples during analog blanking available for timing and other data.
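The C'b, Y', C'r, Y'… word order can be sketched as a simple interleaver. This is an illustration (the function name is ours): 720 luma samples and 360 samples of each color-difference channel produce the 1440 words of a digital active line:

```python
def mux_601(y, cb, cr):
    """Interleave co-sited 4:2:2 samples into C'b, Y', C'r, Y'... word order."""
    assert len(y) == 2 * len(cb) == 2 * len(cr)
    words = []
    for i in range(len(cb)):
        words += [cb[i], y[2 * i], cr[i], y[2 * i + 1]]
    return words

# A 525- or 625-line digital active line: 720 luma + 2 x 360 chroma samples
line = mux_601([0x3AC] * 720, [0x200] * 360, [0x200] * 360)
assert len(line) == 1440
assert line[:4] == [0x200, 0x3AC, 0x200, 0x3AC]
```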

Because of the requirement for multiple-conductor cables and patching panels, parallel connection of digital studio equipment is practical only for small, permanently configured installations.


The serial digital interface (SDI)

Regardless of format, there is a clear need for data transmission over a single coaxial cable. This is not simply because the data rate is relatively high, but also, if the signal were transmitted without modification, reliable recovery would be difficult. The signal must be modified prior to transmission to ensure that there are sufficient edges for reliable clock recovery, to minimize the low-frequency content of the transmitted signal, and to spread the energy spectrum so that RF emission problems are minimized. A serial digital interface that uses scrambling and conversion to NRZI was developed to meet these needs. This serial interface is defined in ANSI/SMPTE 259M, ITU-R BT.656, and EBU Tech. 3267, for both standard definition component and composite signals including embedded digital audio. A scaled version of this serial interface is specified for high-definition transmission.

Conceptually, the serial digital interface is much like a carrier system for studio applications. Baseband video and audio signals are digitized and combined on the serial digital “carrier” as shown in Figure 13. Note, this is not strictly a carrier system in that it is a baseband digital signal and not a signal modulated on a carrier. The bit rate (carrier frequency) is determined by the clock rate of the digital data: 270 Mb/s for standard definition component digital and 1.485 Gb/s (or 2.97 Gb/s) for high-definition formats. (Other rates, including 143 Mb/s and 177 Mb/s for NTSC and PAL composite serial interfaces, are also used but will not be covered in detail in this primer.)

Figure 12. Layout of 2:1 interlaced digital frame.

Figure 13. The carrier concept.


Parallel data representing the samples of the analog signal components is processed as shown in Figure 14 to create the serial digital data stream. The parallel clock is used to load sample data into a shift register, and a 10x multiple of the parallel clock shifts the bits out, LSB first, for each 10-bit data word. If only 8 bits of data are available, the serializer places zeros in the two LSBs to complete the 10-bit word. In component formats, the EAV and SAV timing signals on the parallel interface provide unique sequences that can be identified in the serial domain to permit word framing. Coding of EAV and SAV data packets is described in the Digital Studio Synchronization and Timing section of this primer. If other ancillary data such as audio has been inserted into the parallel signal, this data will also be carried by the serial interface.

Following serialization of the parallel information, the data stream is scrambled by a mathematical algorithm, then encoded into NRZI by a concatenation of the following two functions:

G1(X) = X9 + X4 + 1

G2(X) = X + 1

Scrambling the signal makes it statistically likely to have low dc content for easier handling and a great number of transitions for easier clock recovery. NRZI formatting makes the signal polarity-insensitive.
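The G1 scrambler can be sketched as a self-synchronizing shift register: with G1(X) = X9 + X4 + 1, each scrambled bit is the input bit XORed with the scrambled bits four and nine clocks earlier, and the descrambler is the feed-forward inverse. This is an illustration of the polynomial, not production SDI code:

```python
import random

def scramble(bits):
    """Self-synchronizing scrambler for G1(X) = X^9 + X^4 + 1 (illustrative)."""
    out, state = [], [0] * 9            # the last nine scrambled bits
    for a in bits:
        b = a ^ state[3] ^ state[8]     # b[n] = a[n] xor b[n-4] xor b[n-9]
        out.append(b)
        state = [b] + state[:8]
    return out

def descramble(bits):
    """Feed-forward inverse: a[n] = b[n] xor b[n-4] xor b[n-9]."""
    out, state = [], [0] * 9
    for b in bits:
        out.append(b ^ state[3] ^ state[8])
        state = [b] + state[:8]
    return out

data = [random.randint(0, 1) for _ in range(1000)]
assert descramble(scramble(data)) == data   # the receiver recovers the original
# A long run of zeros after a single '1' scrambles into a transition-rich stream
assert sum(scramble([1] + [0] * 499)) > 1
```

Because the descrambler is feed-forward, it self-synchronizes on the received bits; this is why the receiver needs no separate scrambler state from the transmitter.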

At the receiver, the inverse of this algorithm is used in the deserializer to recover the correct data so the end user sees the original, unscrambled components. In the serial digital transmission system, the clock is contained in the data, as opposed to the parallel system where there is a separate clock line. By scrambling the data, an abundance of transitions is assured as required for clock recovery. For system stress testing (see the Digital System Testing section), specific test signals have been developed that introduce sequences with high dc content and minimum transitions to test the effectiveness of the SDI receiver circuitry. A normally operating serial digital system will not fail even when stressed by these difficult signals.


Figure 14. Parallel-to-serial conversion.

Figure 15. NRZ and NRZI relationship.


Encoding into NRZI makes the serial data stream polarity-insensitive. NRZ (Non-Return to Zero) is the familiar logic level: high = “1”, low = “0”. For a transmission system it is convenient not to require a certain polarity of signal at the receiver. As shown in Figure 15, a data transition is used to represent each data “1” and there is no transition for a data “0”. The result is that it is only necessary to detect transitions; either polarity of the signal may be used. Another result of NRZI encoding is that a signal of all “1”s now produces a transition every clock interval and results in a square wave at one-half the clock frequency. However, “0”s produce no transition, which leads to the need for scrambling. At the receiver, the rising edge of a square wave at the clock frequency would be used for data detection.
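The NRZ-to-NRZI relationship of Figure 15 can be demonstrated in a few lines. This sketch shows the three properties just described: all “1”s become a half-clock square wave, “0”s produce no transitions, and an inverted waveform decodes to the same data:

```python
def nrzi_encode(bits, level=0):
    """NRZI: a '1' toggles the line level, a '0' leaves it unchanged."""
    out = []
    for b in bits:
        level ^= b
        out.append(level)
    return out

def nrzi_decode(levels, prev=0):
    """Recover data from transitions only; absolute polarity is irrelevant."""
    out = []
    for lv in levels:
        out.append(lv ^ prev)
        prev = lv
    return out

# All '1's become a square wave at one-half the clock frequency...
assert nrzi_encode([1] * 8) == [1, 0, 1, 0, 1, 0, 1, 0]
# ...while '0's produce no transitions at all (hence the need for scrambling)
assert nrzi_encode([0] * 8) == [0] * 8

# Polarity insensitivity: inverting the waveform decodes to the same data
data = [1, 0, 1, 1, 0, 0, 1, 0]
wave = nrzi_encode(data)
inverted = [1 - lv for lv in wave]
assert nrzi_decode(wave, prev=0) == nrzi_decode(inverted, prev=1) == data
```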

The serial digital interface may be used over moderate distances in a well-designed system with normal 75-ohm video cables, connectors, and patch panels. As an example, the effects of an unterminated cable, such as may be found on a T-connector, may be unnoticeable with analog video but will cause substantial reflections and potential program loss with serial digital video.

This discussion of component video in the parallel and serial domain is generally applicable to both standard definition and high-definition scanning formats. Sampling and quantization levels are generally the same, as is the formatting of synchronizing information. Sampling rates are higher, and there are generally more samples available for ancillary data in high-definition formats. Line numbering and error-check words are present in high-definition formats, and there are more samples available for multi-channel audio. The principles, however, are the same for standard and high-definition formats. Understanding one component digital format puts us well on our way to understanding all of the others. This primer will point out differences as the discussion continues. Digital standard and high-definition video scanning formats are discussed and compared in the Timing and Synchronization section of this primer.

High-definition video builds on standard definition principles

In transitioning to digital high-definition we can use the basic principles learned for standard definition and apply them to the specific requirements of HDTV. The way we sample the analog signal is the same in principle; we just use higher channel bandwidths and sample rates. The way we process the digital signal is the same in principle; we just handle higher data rates and take greater care with system design. Everything along the line operates at faster data rates and higher bandwidths, but almost every principle is familiar.

There are a wide variety of formats within high-definition television. This gives the broadcast engineer a wide range of flexibility, but it seemingly increases the complexity of the broadcast system.

Standards define the scanning format, analog interface, parallel digital interface, and the serial digital interface for creating and handling high-definition video. Key standards of interest include:

ANSI/SMPTE 240M, Television – Signal Parameters – 1125-Line High-Definition Production Systems. Defines the basic characteristics of analog video signals associated with origination equipment operating in 1125-line (1035 active) production systems at 60 Hz and 59.94 Hz field rates.

SMPTE 260M, Television – Digital Representation and Bit-Parallel Interface – 1125/60 High-Definition Production System. Defines the digital representation of 1125/60 high-definition signal parameters defined in analog form by ANSI/SMPTE 240M.

ANSI/SMPTE 274M, Television – 1920 x 1080 Scanning and Analog and Parallel Digital Interfaces for Multiple Picture Rates. Defines a family of scanning systems having an active picture area of 1920 pixels by 1080 lines and an aspect ratio of 16:9.

ANSI/SMPTE 292M, Television – Bit-Serial Digital Interface for High-Definition Television Systems. Defines the bit-serial digital coaxial and fiber-optic interface for high-definition component signals operating at 1.485 Gb/s and 1.485/1.001 Gb/s.

ANSI/SMPTE 296M, Television – 1280 x 720 Scanning, Analog and Digital Representation and Analog Interface. Defines a family of progressive scan formats having an active picture area of 1280 pixels by 720 lines and an aspect ratio of 16:9.

ANSI/SMPTE 372M, Television – Dual Link 292. Defines a method for carrying 1080i/p YCbCr formats and RGBA 1080i/p formats in either 10- or 12-bit form via two HD-SDI links.

ANSI/SMPTE 424M, Television – 3 Gb/s Signal/Data Serial Interface. Defines a method for transporting a 3 Gb/s serial digital signal over a coaxial interface.

ANSI/SMPTE 425M, Television – 3 Gb/s Signal/Data Serial Interface – Source Image Format Mapping. Defines the method of transporting 1920x1080 and 2048x1080 picture formats over a single 3 Gb/s transport interface.


Figure 16. Ancillary data in the digital line vs. analog representation.

Typical analog video bandwidth of the high-definition red, green, and blue components is 30 MHz for 1080-line formats (interlaced and progressive at 30, 29.97, 25, 24, and 23.98 Hz) and for 720 progressive scan formats, and 60 MHz for 1080p (50, 59.94, 60) progressive formats. Therefore, a high sample rate is required to digitize the matrixed luma and color-difference signals. The sample rate for the 30 MHz luma Y' channel is 74.25 MHz, and half that rate, 37.125 MHz, is used to sample each of the 15 MHz color-difference signals C'b and C'r. The signals are sampled with 10 bits of resolution. C'b and C'r are multiplexed into a single stream of 10-bit parallel data at 74.25 Mwords/s, then multiplexed with the 74.25 Mwords/s luma data, creating a 10-bit parallel data stream at 148.5 Mwords/s in word order C'b, Y', C'r, Y', the same as standard definition. Just as in standard definition, the parallel data is then serialized, in this case to a scrambled, NRZI, 1.485 Gb/s data stream for transmission within the studio plant. Please note that for the fractional frame rates 59.94 Hz, 29.97 Hz, and 23.98 Hz, the clock rate is divided by 1.001.
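These high-definition rates can be checked against the line timings of Table 5. The arithmetic below is illustrative: total samples per line times lines per frame times frame rate gives the 74.25 MHz luma clock, and the multiplexed word rate times 10 bits gives the 1.485 Gb/s serial rate:

```python
# Total samples per line x lines per frame x frame rate = luma sample rate
assert 2200 * 1125 * 30 == 74_250_000        # 1080/30: 2200T total line, 1125 lines
assert 2640 * 1125 * 25 == 74_250_000        # 1080/25: 2640T total line
assert 1650 *  750 * 60 == 74_250_000        # 720p60: 1650T total line, 750 lines

# Luma plus two half-rate chroma streams -> 148.5 Mwords/s, 10 bits per word
word_rate = 74_250_000 + 2 * 37_125_000
assert word_rate == 148_500_000
assert word_rate * 10 == 1_485_000_000       # the 1.485 Gb/s serial rate

# Fractional frame rates divide the clock by 1.001
assert round(1_485_000_000 / 1.001) == 1_483_516_484
```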

Chroma and luma quantization (refer back to Figures 9 and 10) is the same for standard definition and high-definition signals, and decimal 10-bit codewords 0, 1, 2, 3 and 1020, 1021, 1022, and 1023 are still excluded values. The codewords for EAV and SAV have the same functionality for standard and high-definition. Additional words follow EAV in high-definition formats to number individual lines and provide line-by-line error checking of luma and the two color-difference channels.

Formatting of data in the video line is shown in Figure 16, which also illustrates the timing relationship with analog high-definition video.

In high-definition formats, the four-word EAV sequence is immediately followed by a two-word line number (LN0 and LN1), followed by a two-word CRC (YCR0 and YCR1). The first of these is a line counter, an 11-bit binary value distributed across two data words, LN0 and LN1, as shown in Table 3. For example, for line 1125, the two data words would have the values LN0 = 194h and LN1 = 220h, corresponding to the 11-bit binary value 100 0110 0101.

Word   9 (MSB)  8      7      6      5      4      3      2      1      0 (LSB)

LN0    Not B8   L6     L5     L4     L3     L2     L1     L0     R (0)  R (0)

LN1    Not B8   R (0)  R (0)  R (0)  L10    L9     L8     L7     R (0)  R (0)

Table 3. Bit Distribution of Line Number Word.
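The Table 3 layout can be sketched as a packing function (the function name is ours). Bits 8–2 of LN0 carry L6–L0, bits 5–2 of LN1 carry L10–L7, and bit 9 of each word is the complement of bit 8:

```python
def line_number_words(line):
    """Pack an 11-bit line number into LN0/LN1 as laid out in Table 3."""
    ln0 = (line & 0x7F) << 2            # L6..L0 into bits 8..2
    ln0 |= (~ln0 >> 8 & 1) << 9         # bit 9 = Not B8
    ln1 = (line >> 7 & 0x0F) << 2       # L10..L7 into bits 5..2
    ln1 |= (~ln1 >> 8 & 1) << 9
    return ln0, ln1

# The worked example from the text: line 1125 -> 194h, 220h
assert line_number_words(1125) == (0x194, 0x220)
assert line_number_words(1) == (0x204, 0x200)
```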


Figure 17. Spatial layout of the digital frame with V, F, and H-bit values.

Word   9 (MSB)  8      7      6      5      4      3      2      1      0 (LSB)

YCR0   Not B8   CRC8   CRC7   CRC6   CRC5   CRC4   CRC3   CRC2   CRC1   CRC0

YCR1   Not B8   CRC17  CRC16  CRC15  CRC14  CRC13  CRC12  CRC11  CRC10  CRC9

CCR0   Not B8   CRC8   CRC7   CRC6   CRC5   CRC4   CRC3   CRC2   CRC1   CRC0

CCR1   Not B8   CRC17  CRC16  CRC15  CRC14  CRC13  CRC12  CRC11  CRC10  CRC9

Table 4. Bit Distribution of Words Making Up Luma and Chroma CRCs in High-Definition Formats.

CRC checking in high-definition is done separately for luma and chroma on each line. A CRC value is used to detect errors in the digital active line by means of the calculation CRC(X) = X18 + X5 + X4 + 1, with an initial value of zero at the start of the first active line word; the calculation ends at the final word of the line number. The value is then distributed as shown in Table 4. One value, YCR0 and YCR1, is calculated for luma, and another value, CCR0 and CCR1, is calculated for color-difference data.
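The CRC polynomial can be sketched as a bitwise division over the line's 10-bit words. This is an illustrative implementation: the exact bit-ordering conventions of SMPTE 292M are not reproduced here, but the two essential properties hold regardless, so the checks below are on those properties rather than on a standard test vector:

```python
POLY = (1 << 18) | (1 << 5) | (1 << 4) | 1    # X^18 + X^5 + X^4 + 1

def crc18(words):
    """18-bit CRC over 10-bit words, initial value zero (bit order illustrative)."""
    crc = 0
    for w in words:
        for i in range(10):                   # take each word LSB first
            crc = (crc << 1) | (w >> i & 1)
            if crc & (1 << 18):
                crc ^= POLY
    return crc

line = [0x200, 0x3AC] * 960                   # a dummy active line of 10-bit words
good = crc18(line)
bad = crc18([line[0] ^ 0x004] + line[1:])     # flip a single bit
assert crc18([0] * 1920) == 0                 # all-zero input yields a zero CRC
assert good != bad                            # any single-bit error changes the CRC
assert 0 <= good < (1 << 18)                  # the result fits in 18 bits
```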

Luma and chroma CRC values can be displayed on the measurement instrument and used to determine whether any errors accumulate within the signal as it travels from point to point.

In standard definition formats, EAV ends with the xyz word; there is no line numbering. A CRC for active picture, and a CRC for the complete field (excluding the time set aside for vertical interval signal switching), is optionally computed once per field in the vertical blanking interval as described in SMPTE RP 165.

All words in the digital line horizontal blanking area between EAV and SAV (Figure 17) are set to black (Y' = 040h, C'b and C'r = 200h) if not used for ancillary data.


Timing and Synchronization

Standards provide information that allows interchange and interoperability among the various devices in the end-to-end video chain. Good standards allow economical utilization of resources and technologies. Standards promote cooperation among users and encourage innovation. Standards are necessary if the video professional and the home viewer are to produce and view the same program.

The American National Standards Institute, Society of Motion Picture and Television Engineers, Audio Engineering Society, and International Telecommunications Union publish the reference standards and recommendations for video and audio. Representative standards and recommendations, listed in Appendix D – Reference Standards for Television, define signal parameters that allow compatibility and regulatory compliance. Standards issued by these bodies are developed with great care, and are very helpful in describing the precise characteristics of each system. The following discussion is an interpretation of those standards to provide a broad understanding of many different individually standardized formats.

Successful creation, transmission, and recovery of a video picture depend on each device in the system operating in synchronization with every other device. As the television camera detects the value of a picture element at a certain position in the scene, it must somehow identify where that value is to finally be reproduced on the television display. Synchronizing elements tell the camera how to produce a picture in concert with other cameras and sources, and tell the receiver how and where to place the picture on the screen when the picture is finally displayed.

The camera and, finally, the display know how to scan the detector or screen. They just need to know where to start, and how to keep in step. The synchronizing information is refreshed once each horizontal line and once each vertical sweep of the display (two sweeps for each full picture in a 2:1 interlaced format). Inside a large studio plant, synchronizing information is provided by an external master synchronizing generator. In a small system, one camera may provide synchronizing information for itself and other video sources as well.

Analog video timing

There are six standard definition composite analog video formats in common use: PAL, PAL-M, PAL-N, NTSC with setup, NTSC without setup, and SECAM. Additionally, some countries permit a wider on-air transmission bandwidth, leaving room for higher video bandwidth. Studio production in SECAM countries is often done in component or PAL, then formatted into SECAM for transmission. SECAM and PAL video formats are similar, with the difference primarily in the way the chroma information is modulated onto the luma video.

Studio video is a continuous stream of information that may be used as it occurs, delayed to match other sources, or recorded for playback later. Whenever it moves, it moves in real time, and it must carry along all of the information necessary to create a picture at the destination. Video contains picture information and timing information to properly reproduce the picture. Timing information includes a pattern of regularly occurring horizontal sync pulses or reserved data words that identify each line of video, interrupted by less frequently occurring vertical sync information that instructs the display to start writing the picture at the top of the screen.

In NTSC or PAL composite video formats, video and timing information can be easily observed. A video waveform monitor is equipped with preset sweep rate selections to display video horizontal lines, the horizontal blanking interval, a sweep of all picture lines (vertical rate), or just the lines in the vertical blanking interval. It is important to recognize that these displays are all of the same video signal; the difference is when the signal is displayed and for how long each time. In modern terms, composite analog video is a time-division multiplex of luminance video and synchronizing information. The chrominance information is a frequency-division multiplex of the two color-difference channels. Just look for what you want when it occurs.


Horizontal timing

Horizontal timing diagrams for 525/59.94 NTSC (Figure 18) and 625/50 PAL (Figure 19) scanning formats are similar in concept, and were developed with the constraints of camera and display devices available in the mid-1900s. The horizontal blanking interval occurs once per line of video information and is modified to provide the vertical blanking interval.

The horizontal FRONT PORCH defines a time for the video in each line to end as the beam approaches the right of the screen. The 50% point on the falling edge of the sync pulse, the system timing reference, can then trigger retrace of the picture tube beam. The SYNC TO BLANKING END assures that video won't start illuminating the screen while the beam is still retracing. The REFERENCE WHITE and REFERENCE BLACK levels are specified to assure every program will appear on the display at the same maximum and minimum brightness for a constant contrast without viewer adjustment. The 7.5 IRE difference in setup (the difference between blanking and black levels) in the NTSC format has been the subject of some discussion over the years, and some countries operate with no setup. The color subcarrier burst provides a periodic stable reference for synchronizing the receiver color oscillator for stable demodulation of chroma information. Although the subcarrier burst is an eight- to ten-cycle sample of a constant frequency, the waveform monitor will be locked to the horizontal sync pulse timing reference, so the NTSC burst will appear to alternate in phase from line to line and, because of a 25 Hz frequency offset, the PAL burst will appear to be constantly changing. The sync edge timing reference and the color subcarrier burst each maintain their own constant phase; they appear to alternate or change because they come into step with each other only periodically.

Figure 19. PAL horizontal blanking interval.

Figure 18. NTSC horizontal blanking interval.


A line of analog video starts at the 50% point of the falling edge of the bi-level sync pulse and ends at the same point in the next horizontal video line. High-definition analog production formats may use a tri-level sync timing pulse extending first below, then above blanking level. The timing reference, 0H, for analog tri-level sync is the positive-going transition of the sync waveform through blanking level (Figure 20 and Table 5).

The spatial relationship of the timing signals to the picture time of the video signal is illustrated in Figure 21. For a progressive 1:1 format, the complete picture (the frame) is scanned from top to bottom, including every picture line in one pass. In interlaced 2:1 formats, the first pass from top to bottom will write half the lines, with each line spaced vertically, and the second pass will be offset to fill in a new field (and complete the frame) between the lines of the previous pass.

Figure 20. High-definition line timing.

Figure 21. Spatial layout of the video frame.


Vertical timing

Vertical timing information is a change in the shape of regularly occurring horizontal synchronizing pulses and the addition of equalizing pulses. The vertical blanking interval (Figure 22 NTSC, Figure 23 PAL) is 20 to 25 video lines in duration and is displayed center screen in the waveform monitor two-field display. The longer vertical blanking time allows the slower vertical return of the picture tube electron beam to the top of the screen.

The different patterns illustrated above and on the next page start the video line at the left or middle at the top of the screen to provide a 2:1 interlace of the fields in PAL and NTSC formats. Frequencies are chosen to reduce visibility of the color subcarrier information, which is running at a visible video frequency. It takes eight fields for everything to come back to the original phase relationship (a complete color frame) for a PAL signal, and four fields for NTSC.
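The four-field and eight-field color frames follow from the subcarrier-to-line-rate ratios. A small arithmetic check, assuming the standard ratios (455/2 for NTSC; 1135/4 plus a 25 Hz offset for PAL): the fractional remainder of subcarrier cycles per frame tells how many frames pass before subcarrier and vertical sync realign.

```python
from fractions import Fraction

# NTSC: subcarrier = (455/2) x line rate; 525 lines per frame
ntsc_cycles_per_frame = Fraction(455, 2) * 525
assert ntsc_cycles_per_frame == Fraction(238875, 2)
# A half-cycle remainder repeats after 2 frames = 4 fields
assert ntsc_cycles_per_frame.denominator == 2

# PAL: subcarrier = (1135/4) x line rate + 25 Hz; 625 lines, 25 frames/s,
# so the 25 Hz offset adds exactly one extra cycle per frame
pal_cycles_per_frame = Fraction(1135, 4) * 625 + 1
# A quarter-cycle remainder repeats after 4 frames = 8 fields
assert pal_cycles_per_frame.denominator == 4
```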

Format (pixels, rate, scan)    Sampling Frequency (MHz) (1/T)    A      B      C      D      E

1920x1080 60 1:1 148.5 44T 148T 280T 1920T 2200T

1920x1080 59.94 1:1 148.5/1.001 44T 148T 280T 1920T 2200T

1920x1080 60 2:1 74.25 44T 148T 280T 1920T 2200T

1920x1080 59.94 2:1 74.25/1.001 44T 148T 280T 1920T 2200T

1920x1080 30 1:1 74.25 44T 148T 280T 1920T 2200T

1920x1080 29.97 1:1 74.25/1.001 44T 148T 280T 1920T 2200T

1920x1080 50 1:1 148.5 484T 148T 720T 1920T 2640T

1920x1080 50 2:1 74.25 484T 148T 720T 1920T 2640T

1920x1080 25 1:1 74.25 484T 148T 720T 1920T 2640T

1920x1080 24 1:1 74.25 594T 148T 830T 1920T 2750T

1920x1080 23.98 1:1 74.25/1.001 594T 148T 830T 1920T 2750T

1280x720 60 1:1 74.25 70T 220T 370T 1280T 1650T

1280x720 59.94 1:1 74.25/1.001 70T 220T 370T 1280T 1650T

1280x720 50 1:1 74.25 400T 220T 700T 1280T 1980T

1280x720 30 1:1 74.25 1720T 220T 2020T 1280T 3300T

1280x720 29.97 1:1 74.25/1.001 1720T 220T 2020T 1280T 3300T

1280x720 25 1:1 74.25 2380T 220T 2680T 1280T 3960T

1280x720 24 1:1 74.25 2545T 220T 2845T 1280T 4125T

1280x720 23.98 1:1 74.25/1.001 2545T 220T 2845T 1280T 4125T

Table 5. High-Definition Line Timing in Sampling Clock Cycles (T).


Figure 22 shows the alternating fields and the four-field NTSC color frame. The color subcarrier comes back into the same relationship with the vertical sync after four fields.

The PAL vertical blanking interval, Figure 23, shows the alternating synchronizing patterns creating the interlaced frame. Because of the 25 Hz offset, the PAL subcarrier phase comes into the same relationship with the vertical sync every eight fields, for an eight-field color frame. SECAM horizontal and vertical sync timing is similar to PAL, but differs in the way chroma is modulated onto the luminance signal.

The phase relationship between the color subcarrier and the PAL or NTSC vertical sync pattern identifying the correct field is important when one source video signal joins or is suddenly replaced by another, as when signals are edited, switched, or combined by special-effects equipment.

Figure 22. NTSC vertical blanking interval.


This important relationship is referred to as SCH, or Subcarrier-to-Horizontal phase. For component video we need only be concerned with the correct positioning of the three channels that make up the color picture, as chroma information is not represented by a modulated subcarrier.

Line numbering in NTSC starts with the first vertical equalizing pulse after the last full line of video and continues through each field (263 lines for fields one and three, 262 lines for fields two and four). Line numbering for PAL and most analog high-definition formats starts with the first broad pulse after the last video half-line, and the count continues through the full frame (625 lines for PAL).

In high-definition, there are progressive and interlaced scanning formats, as shown in Figure 24. The five lines of vertical interval broad pulses are slightly different from those of standard definition because of the tri-level sync pulse used in high-definition. The vertical interval of the progressive 1080p format (SMPTE 274M) is shown with appropriate line numbers, along with the interlaced line numbers of the 1080i format (SMPTE 274M) and the 1035i format (SMPTE 240M).

Figure 23. PAL vertical blanking interval.


Figure 24. Analog high-definition vertical blanking interval.


Analog high-definition component video parameters

ANSI/SMPTE 240M defines analog high-definition video in the 1125/60 (59.94)/2:1 format. ITU-R BT.709 (Part 1) recognizes both 1125/60/2:1 and 1250/50/2:1 (however, the 1250/50/2:1 format is no longer used). These analog rates are shown in Table 6, along with some timings relative to their digital counterparts.

Tektronix has developed a simple proprietary method for timing an analog and digital facility within the WFM and WVR series of waveform monitors and rasterizers. The Timing display provides both a simple graphical rectangle window, which shows the relative timing between the external reference and the input signal, and measurement readouts of the difference between the two signals in lines and microseconds (µs), as shown in Figure 25. The input signal can be an HD-SDI, SD-SDI, or analog composite signal, and its timing is compared to the analog black burst or tri-level sync external reference input.

The rectangle display represents one frame for SDI inputs, or a color frame for composite analog inputs. The crosshair in the center is zero offset and the circle represents the timing of the input signal. Lines of advance or delay are shown as vertical displacement, while timing errors of less than a line are shown as horizontal displacement, as shown in Figure 27. If the input is at the same time as the reference, the circle is centered on the crosshair and changes color from white to green.

The Relative to box indicates the chosen zero point for the timing display. The default is the rear panel; in this mode the offset is zero when the input and reference arrive at the same time at the rear panel of the instrument. The other choice is Saved offset; in this mode you can save the timing from one of the input signals and then display the timing relative to that saved offset. This

Figure 25. WVR7120/WFM7120 Series timing display.

Parameter | 1125/60/2:1 (1125/59.94/2:1) | 1125/50/2:1 | 750/60/1:1 (750/59.94/1:1) | 750/50/1:1

Sync Type | Tri-level bipolar | Tri-level bipolar | Tri-level bipolar | Tri-level bipolar
Horizontal Timing | Rising edge, 50% point | Rising edge, 50% point | Rising edge, 50% point | Rising edge, 50% point
Total Lines/Frame | 1125 | 1125 | 750 | 750
Active Video Lines/Frame | 1080 | 1080 | 720 | 720
Field/Frame Frequency | 60 (59.94) Hz | 50 Hz | 60 (59.94) Hz | 50 Hz
Line Frequency | 33.750 (33.7163) kHz | 28.125 kHz | 45 (44.955) kHz | 37.5 kHz
Line Period | 29.6296 (29.6593) µs | 35.556 µs | 22.222 (22.2445) µs | 26.667 µs
Line Blanking | 3.771 (3.775) µs | 9.697 µs | 4.983 (4.988) µs | 9.428 µs
Timing Reference to SAV | 2.586 (2.589) µs | 2.586 µs | 3.502 (3.505) µs | 3.502 µs
EAV to Timing Reference | 1.185 µs | 7.084 µs | 1.481 µs | 5.926 µs
Negative Sync Width | 0.593 µs | 0.593 µs | 0.538 µs | 0.538 µs
Positive Sync Width | 0.593 µs | 0.593 µs | 0.538 µs | 0.538 µs
Sync Amplitude | ±300 mV | ±300 mV | ±300 mV | ±300 mV
Sync Rise/Fall | 0.054 µs | 0.054 µs | 0.054 µs | 0.054 µs
Field Period | 16.67 (16.68) ms | 20 ms | 16.67 (16.68) ms | 20 ms
Field Blanking | 45 lines | 45 lines | 30 lines | 30 lines
Video Signal Amplitude | 700 mV | 700 mV | 700 mV | 700 mV
Nominal Signal Bandwidth | 30 MHz R', G', B' | 30 MHz R', G', B' | 30 MHz R', G', B' | 30 MHz R', G', B'

Table 6. Analog High-Definition Timing Parameters with Selected Digital Relationships.
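The line-rate entries of Table 6 follow directly from the total line count and the field/frame rate; a short sketch (the 1.001 divisor gives the 59.94-related variants):

```python
# Derive Table 6 line frequencies from total lines per frame and field/frame
# rate. For interlaced (2:1) formats there are two fields per frame.
def line_frequency_hz(total_lines, field_rate_hz, interlaced=True):
    frame_rate = field_rate_hz / 2 if interlaced else field_rate_hz
    return total_lines * frame_rate

# 1125/60/2:1: 1125 lines x 30 frames/s = 33.750 kHz
f_1125_60 = line_frequency_hz(1125, 60)
# 1125/59.94/2:1: the integer rate divided by 1.001 -> 33.7163 kHz
f_1125_5994 = line_frequency_hz(1125, 60 / 1.001)
# 750/60/1:1: progressive, 750 x 60 = 45 kHz
f_750_60 = line_frequency_hz(750, 60, interlaced=False)

print(f_1125_60, round(f_1125_5994, 1), f_750_60)
# the line period is the reciprocal, e.g. 1 / 33750 Hz = 29.63 µs
```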


is especially useful in timing together the inputs to a router. Select one of the inputs to the router as the master and apply this signal, along with the external reference signal being used by the router, to the WVR series or WFM7x20/6120 series instrument. Once this measurement is obtained, save the timing offset for the master input and use the Relative to: Saved offset mode. Now, by selecting each of the other inputs to the router via the instrument, the measurement will show the relative offset between the master reference and the other video inputs. Simply adjust the horizontal and vertical timing controls of the input signal until the circle and the crosshair are overlaid and the circle turns green. Fine timing adjustment can be done directly from the numeric readouts on the right-hand side of the display. In this way, each of the inputs to the router is timed relative to the master input signal. This intuitive display can save considerable time in the timing of video systems.
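The lines-plus-microseconds readout described above is just a decomposition of a single time offset against the line period of the format in use. A minimal sketch, with a hypothetical `split_offset` helper (not a Tektronix API):

```python
# Decompose a measured timing offset (seconds) into whole lines of advance or
# delay plus a sub-line residual in microseconds, the way the Timing display
# reports it. line_period_s depends on the format in use.
def split_offset(offset_s, line_period_s):
    """Return (lines, residual_us); lines is signed, residual is sub-line."""
    lines = round(offset_s / line_period_s)
    residual_us = (offset_s - lines * line_period_s) * 1e6
    return lines, residual_us

# Example: a source 3 lines and 1.5 µs late in a 2200-word, 74.25 MHz system
line_period = 2200 / 74.25e6          # 29.63 µs per line
lines, resid = split_offset(3 * line_period + 1.5e-6, line_period)
print(lines, round(resid, 2))         # -> 3 lines, ~1.5 µs residual
```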

Digital Studio Scanning Formats

It is apparent that video scanning standards can be written for a variety of formats. In practice, standards reflect what is possible, with the goal of compatibility throughout an industry. At this time there is no one universal scanning format for standard- or high-definition television, but there is a trend toward making the television receiver compatible with all of the scanning systems likely to be available within a region. This creates a unique problem for the video professional who must produce programs for a worldwide market.

Some digital rates are particularly well suited to standards conversion. ITU-R BT.709 Part 2 defines a digital, square-pixel, common image format (CIF) with common picture parameter values independent of picture rate. This recommendation specifies picture rates of 60, 59.94, 50, 30, 29.97, 25, 24, and 23.976 Hz, all with 1080 active picture lines, each with 1920 picture samples, and an aspect ratio of 16 wide by 9 high. SMPTE RP 211 extends SMPTE 274M, the 1920x1080 family of raster scanning systems, implementing segmented frames for 1920x1080 in 30, 29.97, 25, 24, and 23.976 Hz production formats. These CIF rates are the 1920x1080 rates in Table 7. The 1280x720 rates in this table are defined by ANSI/SMPTE 296M, and SMPTE 293M defines the 720x483 progressive rates. Note that the frame rates and sampling frequencies listed in this table have been rounded to two or three decimal places. For non-integer frame rate systems, the exact frame and sampling frequency is the corresponding integer rate divided by exactly 1.001.
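The exact values behind the rounded entries in Table 7 can be computed as rational numbers; a small sketch using Python's `fractions` module:

```python
from fractions import Fraction

# Each non-integer rate in Table 7 is the integer rate divided by exactly
# 1.001, i.e. multiplied by 1000/1001.
def exact_rate(integer_rate):
    return Fraction(integer_rate) * Fraction(1000, 1001)

print(float(exact_rate(30)))     # 29.97002997...
print(float(exact_rate(24)))     # 23.976023976...
# The sampling clock scales the same way: 74.25 MHz -> 74.1758... MHz
print(float(Fraction(74_250_000) * Fraction(1000, 1001)))
```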

Segmented frame production formats

Several formats in the scanning formats table are designated 1:1sF. The “sF” denotes a “segmented frames” format per SMPTE recommended practice RP 211. In segmented frame formats, the picture is captured as a frame in one scan, as in progressive formats, but transmitted as in an interlaced format, with even lines in one field and odd lines in the next field. The assignment of lines is the same as in an interlaced system, but the picture is captured for both fields in one pass, eliminating the spatial mis-registration that occurs with movement in an interlaced system. This gives the advantages of progressive scan, but reduces the amount of signal processing required and doubles the presentation rate (reducing 24 to 30 Hz visual flicker) in the analog domain. Segmented frame formats may be handled as is, or may be easily converted to progressive formats, as shown in Figure 26.
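The segmentation described above can be sketched in a few lines; because both segments come from one scan, the round trip back to a progressive frame is lossless:

```python
# Sketch of segmenting one progressively captured frame for transport:
# the same line assignment as interlace, but both segments come from one scan.
def segment_frame(frame_lines):
    """Split a list of lines (index 0 = line 1) into two segments."""
    segment1 = frame_lines[0::2]   # lines 1, 3, 5, ... ("field one" positions)
    segment2 = frame_lines[1::2]   # lines 2, 4, 6, ...
    return segment1, segment2

def reassemble(segment1, segment2):
    """Interleave the two segments back into a progressive frame."""
    frame = []
    for a, b in zip(segment1, segment2):
        frame += [a, b]
    return frame

frame = list(range(1, 1081))            # line numbers of a 1080-line frame
s1, s2 = segment_frame(frame)
assert reassemble(s1, s2) == frame      # lossless round trip
```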

Figure 26. Conversion of a progressive frame into segments.

Figure 27. Interpretation of Timing Display.

Page 30: A Guide to Standard and High-Definition Digital Video ... to Standard HD Digital... · A Guide to Standard and High-Definition Digital Video Measurements ... of television, video

A Guide to Standard and High-Definition Digital Video MeasurementsPrimer

26 www.tektronix.com/video_audio

System Nomenclature | Luma or R'G'B' Samples per Active Line | Active Lines per Frame | Frame Rate (Hz) | Scanning Format | Luma or R'G'B' Sampling Frequency (MHz) | Total Samples per Line | Analog Sync Time Ref Word | Total Lines per Frame

1920x1080/60/1:1 1920 1080 60.00 Progressive 148.500 2200 2008 1125

1920x1080/59.94/1:1 1920 1080 59.94 Progressive 148.352 2200 2008 1125

1920x1080/50/1:1 1920 1080 50.00 Progressive 148.500 2640 2448 1125

1920x1080/60/2:1 1920 1080 30.00 2:1 Interlace 74.250 2200 2008 1125

1920x1080/59.94/2:1 1920 1080 29.97 2:1 Interlace 74.176 2200 2008 1125

1920x1080/50/2:1 1920 1080 25.00 2:1 Interlace 74.250 2640 2448 1125

1920x1080/30/1:1 1920 1080 30.00 Progressive 74.250 2200 2008 1125

1920x1080/29.97/1:1 1920 1080 29.97 Progressive 74.176 2200 2008 1125

1920x1080/25/1:1 1920 1080 25.00 Progressive 74.250 2640 2448 1125

1920x1080/24/1:1 1920 1080 24.00 Progressive 74.250 2750 2558 1125

1920x1080/23.98/1:1 1920 1080 23.98 Progressive 74.176 2750 2558 1125

1920x1080/30/1:1sF 1920 1080 30 Prog. sF 74.250 2200 2008 1125

1920x1080/29.97/1:1sF 1920 1080 29.97 Prog. sF 74.176 2200 2008 1125

1920x1080/25/1:1sF 1920 1080 25 Prog. sF 74.250 2640 2448 1125

1920x1080/24/1:1sF 1920 1080 24 Prog. sF 74.250 2750 2558 1125

1920x1080/23.98/1:1sF 1920 1080 23.98 Prog. sF 74.176 2750 2558 1125

1280x720/60/1:1 1280 720 60.00 Progressive 74.250 1650 1390 750

1280x720/59.94/1:1 1280 720 59.94 Progressive 74.176 1650 1390 750

1280x720/50/1:1 1280 720 50.00 Progressive 74.250 1980 1720 750

1280x720/30/1:1 1280 720 30.00 Progressive 74.250 3300 3040 750

1280x720/29.97/1:1 1280 720 29.97 Progressive 74.176 3300 3040 750

1280x720/25/1:1 1280 720 25.00 Progressive 74.250 3960 3700 750

1280x720/24/1:1 1280 720 24.00 Progressive 74.250 4125 3865 750

1280x720/23.98/1:1 1280 720 23.98 Progressive 74.176 4125 3865 750

625/50/2:1 (BT.601) 720 576 25.00 2:1 Interlace 13.500 864 732 625

525/59.94/2:1 (BT.601) 720 483 29.97 2:1 Interlace 13.500 858 736 525

720x483/59.94/1:1/4:2:2 720 483 59.94 Progressive 2 x 13.500 858 736 525

720x483/59.94/1:1/4:2:0 720 483 59.94 Progressive 18.000 858 736 525

Table 7. Scanning Formats for Studio Digital Video.


It is apparent from the review of analog formats that a lot of non-video time is assigned just to pass along the synchronizing information and wait for the picture tube to properly retrace the beam. In a digital component studio format, sync is a short reserved-word pattern, and the balance of this time can be used for multi-channel audio, error check sums, and other ancillary data. Using a digital waveform monitor in PASS mode, these short digital timing packets appear as short pulses at each end of the horizontal line of the decoded video waveform (Figure 28; also see Figure 11). Ringing will appear in the analog representation because the data words occur at the clock rate, well beyond the bandpass of the analog display system. The DAT option for the WFM7120/6120 provides a logic-level DATA view (Figure 29) of these data words, precisely identifying each word and its value.

It is important to keep several interesting timing definitions in mind when comparing analog and digital video:

1. A line of digital video starts with the first word of the EAV (End of Active Video) data packet, 3FF, and ends with the last word of video data in the line. Digital line numbering starts with the first line of vertical blanking.

2. The sample numbers in the digital video line start (sample 0) with the first word of active video, which is the first word after the four-word pattern of the SAV sequence. So the line number does not change at the same time as the sample number goes back to zero.

3. Unlike digital timing, the analog line starts and ends at the timing reference point; the 50% point of the leading edge of bi-level sync, or the positive-going zero crossing for tri-level sync. The analog timing reference, then, is after the digital timing reference and before the digital line first sample, during the time allocated for ancillary data when the signal is digitized. The digital sample word corresponding to the analog timing reference is specified by the digital standard.

Digital video synchronization is provided by EAV and SAV sequences, which start with a unique three-word pattern: 3FFh (all bits in the word set to one), 000h (all zeros), 000h (all zeros), followed by a fourth “xyz” word with the format described in Table 8.

Digital Studio Synchronization and Timing

Bit Number 9 (MSB) 8 7 6 5 4 3 2 1 0 (LSB)

Function Fixed (1) F V H P3 P2 P1 P0 Fixed (0) Fixed (0)

Figure 29. 270 Mb/s EAV timing reference packet viewed as multiplexed data.

Table 8. Format of EAV/SAV “xyz” Word.

Figure 28. 270 Mb/s EAV timing reference packet viewed as an analog luma channel signal.


Field | 525 Lines | 625 Lines | 1080P Lines | 1080i Lines | 1035i Lines | 720P Lines | SAV/EAV | Bits: 9 F V H P3 P2 P1 P0 1 0

Active Video 1 | 20-263 | 23-310 | 42-1121 | 21-560 | 41-557 | 26-745 | SAV 200 | 1 0 0 0 0 0 0 0 0 0
| | | | | | | EAV 274 | 1 0 0 1 1 1 0 1 0 0
Field Blanking 1 | 4-19, 264-265 | 1-22, 311-312 | 1-41, 1122-1125 | 1-20, 561-563 | 1-40, 558-563 | 1-25, 746-750 | SAV 2AC | 1 0 1 0 1 0 1 1 0 0
| | | | | | | EAV 2D8 | 1 0 1 1 0 1 1 0 0 0
Active Video 2 | 283-525 | 336-623 | NA | 584-1123 | 603-1120 | NA | SAV 31C | 1 1 0 0 0 1 1 1 0 0
| | | | | | | EAV 368 | 1 1 0 1 1 0 1 0 0 0
Field Blanking 2 | 1-3, 266-282 | 624-625, 313-335 | NA | 1124-1125, 564-583 | 1121-1125, 564-602 | NA | SAV 3B0 | 1 1 1 0 1 1 0 0 0 0
| | | | | | | EAV 3C4 | 1 1 1 1 0 0 0 1 0 0


The “xyz” word is a 10-bit word with the two least significant bits set to zero to survive translation to and from an 8-bit system. Bits of the “xyz” word have the following functions:

Bit 9 – (Fixed bit) always fixed at 1

Bit 8 – (F-bit) always 0 in a progressive scan system; 0 for field one and 1 for field two of an interlaced system

Bit 7 – (V-bit) 1 in vertical blanking interval; 0 during active video lines

Bit 6 – (H-bit) 1 indicates the EAV sequence; 0 indicates the SAV sequence

Bits 5, 4, 3, 2 – (Protection bits) provide limited error correction of the data in the F, V, and H bits

Bits 1, 0 – (Fixed bits) set to zero to give identical word values in 10- and 8-bit systems
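The protection bits are the standard XOR combinations of F, V, and H (P3 = V⊕H, P2 = F⊕H, P1 = F⊕V, P0 = F⊕V⊕H — this is the usual ITU-R BT.656/SMPTE definition, stated here as background rather than quoted from the text above). A sketch that reproduces the hex values listed in Table 10:

```python
# Build the 10-bit "xyz" word from the F, V and H flags, with the protection
# bits formed as XOR combinations: P3 = V^H, P2 = F^H, P1 = F^V, P0 = F^V^H.
def xyz_word(f, v, h):
    p3, p2, p1, p0 = v ^ h, f ^ h, f ^ v, f ^ v ^ h
    bits = [1, f, v, h, p3, p2, p1, p0, 0, 0]   # bit 9 (MSB) down to bit 0
    word = 0
    for b in bits:
        word = (word << 1) | b
    return word

print(hex(xyz_word(0, 0, 0)))   # 0x200  SAV, field one, active video
print(hex(xyz_word(0, 0, 1)))   # 0x274  EAV, field one, active video
print(hex(xyz_word(1, 1, 1)))   # 0x3c4  EAV, field two, field blanking
```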

Format | F = 0 | F = 1 | V = 1 | V = 0

1920x1080P | Always 0 | NA | Lines 1-41, 1122-1125 | Lines 42-1121
1280x720P | Always 0 | NA | Lines 1-25, 746-750 | Lines 26-745
1920x1080i | Lines 1-563 | Lines 564-1125 | Lines 1-20, 561-583, 1124-1125 | Lines 21-560, 584-1123
1035i | Lines 1-563 | Lines 564-1125 | Lines 1-40, 558-602, 1121-1125 | Lines 41-557, 603-1120
525/60 | Lines 4-265 | Lines 1-3, 266-525 | Lines 1-19, 264-282 | Lines 20-263, 283-525
625/50 | Lines 1-312 | Lines 313-625 | Lines 1-22, 311-335, 624-625 | Lines 23-310, 336-623

Figure 30. “xyz” word binary display.

Table 9. Vertical Timing Information for the Digital Signal.

Table 10. Digital xyz Information for HD and SD formats.


Figure 31. High-definition digital vertical timing.

The xyz word in Figure 30 displays the binary value 1001110100, starting with bit 9, the most significant bit. In this example, bits 8, 7, and 6 indicate the xyz word is in field one of an interlaced format, in a line of active video, and in an EAV sequence. If we change the waveform monitor to display the next field, the new binary xyz word would be 1101101000, with bit 8 changing to a binary 1. The protection bits 5, 4, 3, and 2 would also change, to provide limited error handling of the new binary word.

Several F-bit and V-bit examples following this xyz word pattern are provided in Table 9, and the layout of the high-definition vertical interval is illustrated in Figure 31.


Telecine synchronization

The transition to high-definition video has provided several useful formats for the mastering and archiving of program material. For example, 1080 progressive at 23.976 Hz provides a means for a direct transfer of film frames to digital files. The colorist only has to produce one master during the telecine transfer process. This digital master can then be converted to any of the other required distribution formats.

In order to synchronize this multiformat system, the standard reference used is NTSC black burst with a field frequency of 59.94 Hz. To synchronize with equipment operating at 23.976 Hz (24/1.001) or 48 kHz, the black burst signal may carry an optional ten-field sequence for identification of the signal, as specified in SMPTE 318M.

The timing reference synchronizing line is shown in Figure 33 and is inserted on lines 15 and 278 of an NTSC 525/59.94 Hz signal. The first pulse (1) is always present at the start of the ten-field identification sequence. It is followed by between zero and four frame-count pulses (2-5). The end pulse (6) is always absent on line 15 and always present on line 278. Table 11 summarizes this information.

The Tektronix TG700 signal generator platform provides the ability to genlock to SMPTE 318M with the AGL7 analog genlock module, and provides SMPTE 318M output references with the BG7 black burst generator with the CB color bar option.

Ten-Field Sequence | Pulse Position 1 2 3 4 5 6 | Line Position

0 1 0 0 0 0 0 Line 15 Field 1

1 1 0 0 0 0 1 Line 278 Field 2

2 1 1 0 0 0 0 Line 15 Field 1

3 1 1 0 0 0 1 Line 278 Field 2

4 1 1 1 0 0 0 Line 15 Field 1

5 1 1 1 0 0 1 Line 278 Field 2

6 1 1 1 1 0 0 Line 15 Field 1

7 1 1 1 1 0 1 Line 278 Field 2

8 1 1 1 1 1 0 Line 15 Field 1

9 1 1 1 1 1 1 Line 278 Field 2

Figure 33. SMPTE 318M timing reference synchronizing line.

Figure 32. High-definition telecine transfer process.

Table 11. SMPTE 318M Ten-field Timing Sequence.
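The pulse pattern in Table 11 is regular: pulse 1 is always present, pulses 2-5 form a thermometer code of the completed frame count, and pulse 6 marks the second field. A minimal sketch that regenerates the table:

```python
# Reproduce Table 11: pulse 1 always present, pulses 2-5 a frame-count
# "thermometer" code, pulse 6 absent on line 15 and present on line 278.
def ten_field_pulses(sequence):            # sequence: 0..9
    frame_count = sequence // 2            # 0..4 completed frames
    pulses = [1]                           # pulse 1: sequence start marker
    pulses += [1] * frame_count + [0] * (4 - frame_count)   # pulses 2-5
    pulses.append(sequence % 2)            # pulse 6: only on line 278 (field 2)
    return pulses

for seq in range(10):
    line = "line 278, field 2" if seq % 2 else "line 15, field 1"
    print(seq, ten_field_pulses(seq), line)
```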


Dual Link and 3G Formats

Film still continues to dominate high-end production as the main acquisition medium, despite all the advances in digital signal processing. However, several advances have been made within digital signal processing that are allowing higher and higher image resolutions, such as the 2K or 4K image formats, to emulate the “film look” of the material. Having a high-resolution digital distribution master of the material simplifies the visual effects, color correction, and post-production processes. The final digital distribution master can be used to provide a wide range of duplication formats, from Digital Cinema to HD and SD formats.

To achieve distribution of these high-resolution formats, various methods of transmitting the signal between pieces of equipment are necessary. One method is to use multiple High-Definition (HD) Serial Digital Interfaces (SDI), as defined in SMPTE 372M for the Dual Link formats (see Table 12). Another approach is to multiplex the two virtual data streams into a single 3 Gb/s signal, standardized by SMPTE 424M and 425M (see Table 13).


Signal format/sampling structure/pixel depth | Frame/field rates

4:2:2 Y'C'bC'r 10-bit | 60, 59.94 and 50 Progressive
4:4:4 R'G'B' 10-bit; 4:4:4:4 R'G'B' + A 10-bit; 4:4:4 Y'C'bC'r 10-bit; 4:4:4:4 Y'C'bC'r + A 10-bit; 4:4:4 R'G'B' 12-bit; 4:4:4 Y'C'bC'r 12-bit; 4:2:2 Y'C'bC'r (A) 12-bit | 30, 29.97, 25, 24, 23.98 Progressive and PsF; 60, 59.94 and 50 Interlaced

Mapping Structure | Image Format | Signal format/sampling structure/pixel depth | Frame/field rates

1 | 1920x1080 | 4:2:2 (Y'C'bC'r)/10-bit | 60, 59.94 and 50 frames progressive
2 | 1280x720 | 4:4:4 (R'G'B'), 4:4:4:4 (R'G'B' + A)/10-bit; 4:4:4 (Y'C'bC'r), 4:4:4:4 (Y'C'bC'r + A)/10-bit | 60, 59.94 and 50 frames progressive
3 | 1920x1080 | 4:4:4 (R'G'B'), 4:4:4:4 (R'G'B' + A)/10-bit; 4:4:4 (Y'C'bC'r), 4:4:4:4 (Y'C'bC'r + A)/10-bit | 30, 29.97, 25, 24 and 23.98 frames progressive; 60, 59.94 and 50 fields interlaced
4 | 1920x1080 | 4:4:4 (R'G'B')/12-bit; 4:4:4 (Y'C'bC'r)/12-bit; 4:2:2 (Y'C'bC'r)/12-bit | 30, 29.97, 25, 24 and 23.98 frames progressive; 60, 59.94 and 50 fields interlaced
4 | 2048x1080 | 4:4:4 (X'Y'Z')/12-bit | 24 frames progressive, PsF

Table 12. Dual Link-supported formats defined in SMPTE 372M.

Table 13. 3 Gb/s Source Image Formats defined in SMPTE 425M.


Digital Interface Line | 1 | … | 20 | … | 560 | 561 | 562 | 563 | 564 | … | 1123 | 1124 | 1125 | 1

Link A | 2 | … | 40 | … | 1120 | 1122 | 1124 | 1 | 3 | … | 1121 | 1123 | 1125 | 2

Link B | 3 | … | 41 | … | 1121 | 1123 | 1125 | 2 | 4 | … | 1122 | 1124 | 1 | 3

Table 14. Progressive image format divided between Link A and Link B.

Link A C'b0: 0-9 Y'0: 0-9 C'r0: 0-9 Y'1: 0-9 C'b2: 0-9 Y'2: 0-9 C'r2: 0-9

Link B C'b0: 0-9 Y'0: 0-9 C'r0: 0-9 Y'1: 0-9 C'b2: 0-9 Y'2: 0-9 C'r2: 0-9

Table 15. Data structure of Link A and B for fast progressive formats.

Link A | B'0 (even): 0-9 | G'0: 0-9 | R'0 (even): 0-9 | G'1: 0-9 | B'2 (even): 0-9 | G'2: 0-9 | R'2 (even): 0-9

Link B | B'1 (odd): 0-9 | A0: 0-9 | R'1 (odd): 0-9 | A1: 0-9 | B'3 (odd): 0-9 | A2: 0-9 | R'3 (odd): 0-9

Table 16. Data structure for R'G'B' (A) 4:4:4:4 10-bit Dual Link format.

Figure 34. SDI data structure for a single line.


For the Dual Link signals, the various formats are mapped into the two HD-SDI signals; the various mapping structures are therefore constrained by the existing HD-SDI format. Figure 34 shows how the 10-bit sampled 4:2:2 luma (Y) and chroma (C) words are multiplexed together in the HD-SDI signal.

Dual Link Fast Progressive Formats (1920x1080 Y'C'bC'r 4:2:2 10-bit @ 60p, 59.94p, 50p)

For the fast progressive Dual Link formats (60p, 59.94p and 50p), the image structure is identical to that of the HD-SDI format, except that the high-bandwidth signal must be broken up into the two links. The primary link is defined as “Link A” and the secondary link as “Link B.” On an HD waveform monitor, the various trace displays of each link look no different from those of a similar single 1920x1080 interlaced signal, as shown in Figure 35.

Within this format it is important to understand that the original image was scanned as a full-frame progressive image and has been divided between the two links for easy transport across an existing HD-SDI infrastructure. The mapping of the lines between the two links is therefore characterized within the standard; notice how the image is divided up between the two digital fields of the HD-SDI signal, as shown in Table 14. In order to maintain a constant data rate for the three fast progressive frame rates, the blanking interval is changed: for 60p/59.94p a total of 2200 words are used per line, whereas in the 50p format a total of 2640 words per line are used. Table 15 shows how each sample is transported within each link.
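Conceptually, adjacent picture lines travel in parallel, one on each link, so each HD-SDI line period carries two progressive lines. A simplified sketch (the exact interface line numbering offsets of Table 14 are ignored here):

```python
# Simplified sketch of the fast-progressive Dual Link split: consecutive
# picture lines are sent in parallel, one on each link, so each link runs at
# the ordinary HD-SDI line rate while together they carry the full frame.
def split_links(frame_lines):
    pairs = list(zip(frame_lines[0::2], frame_lines[1::2]))
    link_a = [a for a, _ in pairs]
    link_b = [b for _, b in pairs]
    return link_a, link_b

def merge_links(link_a, link_b):
    """Receiver side: interleave the two links back into one frame."""
    frame = []
    for a, b in zip(link_a, link_b):
        frame += [a, b]
    return frame

frame = list(range(1, 1081))             # 1080 picture lines
a, b = split_links(frame)
assert merge_links(a, b) == frame        # the receiver recovers the frame
```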

R'G'B' 4:4:4 and R'G'B' (A) 4:4:4:4 10-bit (30, 29.97, 25, 24, 23.98 Progressive PsF, 60, 59.94 and 50 Interlaced)

The predominant use of the Dual Link format is to carry film-originated R'G'B' material at 23.98p/24p in order to maintain the quality of the original material. In this way there is no loss of resolution from format conversion to Y'C'bC'r color space. However, the R'G'B' signal has a 4:4:4 sampling structure, and this structure has to be constrained to fit within the two 4:2:2 HD-SDI data streams. To achieve this, Link A's [Y'] data space is filled with the G' channel and its [C'b/C'r] data space is filled with the even-numbered B' and R' samples, respectively. In Link B, the [Y'] data space can optionally be filled with Alpha channel data, and the [C'b/C'r] data space is filled with the odd-numbered B' and R' samples, as shown in Table 16. The Alpha channel can be used to carry a data stream or, alternatively, a key channel, which can be used within the post-production process for digital compositing. If the Alpha channel is not present, its value should be set to the blanking level of 64 (10-bit code 040h). When each of these Dual Link signals is viewed on a waveform monitor, the resulting waveform displays are formed as shown in Figure 36, using the SIM option of the WFM7120 to view both links simultaneously. Notice that the Y' channel values are at the correct levels, but the C'b/C'r values are not representative of the true level of the signal; the two Dual Link signals must be combined into a single display for correct representation of the signal.
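The Table 16 mapping can be sketched as a multiplex/demultiplex pair. This is an illustrative model, not the SMPTE 372M bit-exact transport; the blanking value 64 for a missing alpha channel is taken from the text above:

```python
# Sketch of the Table 16 mapping: G' occupies Link A's Y' slots, even B'/R'
# samples occupy Link A's C'b/C'r slots, odd B'/R' samples occupy Link B's
# C'b/C'r slots, and alpha (or 10-bit blanking) fills Link B's Y' slots.
BLANK = 64   # 10-bit blanking code used when no alpha channel is carried

def mux_dual_link(g, b, r, alpha=None):
    n = len(g)
    alpha = alpha or [BLANK] * n
    link_a = {"Y": g,     "Cb": b[0::2], "Cr": r[0::2]}   # even B'/R' samples
    link_b = {"Y": alpha, "Cb": b[1::2], "Cr": r[1::2]}   # odd B'/R' samples
    return link_a, link_b

def demux_dual_link(link_a, link_b):
    g = link_a["Y"]
    b = [x for pair in zip(link_a["Cb"], link_b["Cb"]) for x in pair]
    r = [x for pair in zip(link_a["Cr"], link_b["Cr"]) for x in pair]
    return g, b, r

g, b, r = list(range(0, 8)), list(range(10, 18)), list(range(20, 28))
la, lb = mux_dual_link(g, b, r)
assert demux_dual_link(la, lb) == (g, b, r)    # full 4:4:4 data is recovered
```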



Figure 35. Link A and Link B of a 4:2:2 10-bit 1920x1080 59.94p SMPTE 219 color bar signal.


Figure 36. Waveform displays of Dual Link A and B signal for R'G'B' (A) 4:4:4:4 format.


Y'C'bC'r 4:4:4 and Y'C'bC'r (A) 4:4:4:4 10-bit (30, 29.97, 25, 24, 23.98 Progressive PsF, 60, 59.94 and 50 Interlaced)

The structure of this format is similar to R'G'B' (A) 4:4:4:4, as shown in Table 17. Link A's [Y'] data space is filled with the Y' channel and its [C'b/C'r] data space is filled with the even-numbered C'b and C'r samples, respectively. In Link B, the [Y'] data space can optionally be filled with Alpha channel data, and the [C'b/C'r] data space is filled with the odd-numbered C'b and C'r samples. However, since this format conforms to the Y'C'bC'r format of the HD-SDI data stream, Link A is representative of the signal and can be viewed on an HD waveform monitor. The trace of the Link B signal depends on the value present in the Alpha channel, as shown in the picture tile in Figure 37. With the waveform monitor, it is possible to view the Alpha channel waveform traces by selecting the Alpha channel view in the picture menu of the instrument. In the WFM7120/7020, the Dual Link signal can also be down-converted into a single HD-SDI signal. This signal can be output from the waveform monitor and used for simple monitoring applications, without requiring a Dual Link picture monitor.

R'G'B' 4:4:4 12-bit (30, 29.97, 25, 24, 23.98 Progressive PsF, 60, 59.94 and 50 Interlaced)

To achieve a greater dynamic range for the signal, a 12-bit data format can be accommodated within the Dual Link standard. The problem here is that the data structure of each link conforms to 10-bit words, so a method has been defined to carry the 12-bit data across multiple 10-bit words. In the case of R'G'B' 4:4:4 12-bit, the ten most significant bits (bits 2-11) are carried within ordinary 10-bit words. The remaining two bits from each of the R'G'B' channels are combined into the Y' channel of Link B, as shown in Table 18. Link A carries the G' channel bits 2-11 and the even sample values of B' and R' bits 2-11. In Link B, the Alpha channel is replaced by the combined bits 0-1 of the R'G'B' samples, and the odd samples of the B' and R' bits 2-11 are carried within the [C'b/C'r] words. The combined R'G'B' bits 0-1 are mapped into a 10-bit word as defined in Table 19, where EP represents even parity over bits 7-0, the reserved values are set to zero, and bit 9 is the inverse of bit 8 (Not B8).
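The word packing can be sketched as follows; the parity interpretation (EP = even parity over bits 7-0, bit 9 its inverse) follows the Table 19 description, and the helper names are for illustration only:

```python
# Pack the two LSBs of 12-bit G', B', R' samples into one 10-bit word as in
# Table 19: bits 7..2 carry G1 G0 B1 B0 R1 R0, bits 1..0 are reserved (zero),
# bit 8 is even parity (EP) over bits 7-0, and bit 9 is "Not B8".
def lsb_word(g12, b12, r12):
    payload = ((g12 & 0b11) << 6) | ((b12 & 0b11) << 4) | ((r12 & 0b11) << 2)
    ep = bin(payload).count("1") & 1        # parity of bits 7-0
    return ((ep ^ 1) << 9) | (ep << 8) | payload

def msb_word(sample12):
    """Bits 2-11 of a 12-bit sample, carried as an ordinary 10-bit word."""
    return (sample12 >> 2) & 0x3FF

# Example: recover a 12-bit G' sample from its two transport words
g12 = 0xABC
word_hi, word_lo = msb_word(g12), lsb_word(g12, 0, 0)
recovered = (word_hi << 2) | ((word_lo >> 6) & 0b11)
assert recovered == g12
```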

Link A | C'b0 (even): 0-9 | Y'0: 0-9 | C'r0 (even): 0-9 | Y'1: 0-9 | C'b2 (even): 0-9 | Y'2: 0-9 | C'r2 (even): 0-9

Link B | C'b1 (odd): 0-9 | A0: 0-9 | C'r1 (odd): 0-9 | A1: 0-9 | C'b3 (odd): 0-9 | A2: 0-9 | C'r3 (odd): 0-9

Table 17. Data structure for Y'C'bC'r (A) 4:4:4:4 Dual Link format.


Figure 37. Waveform displays of Dual Link A and B signals for Y'C'bC'r (A) 4:4:4:4 10-bit format.

Link A | B'0: 2-11 | G'0: 2-11 | R'0: 2-11 | G'1: 2-11 | B'2: 2-11 | G'2: 2-11 | R'2: 2-11

Link B | B'1: 2-11 | R'G'B'0: 0-1 | R'1: 2-11 | R'G'B'1: 0-1 | B'3: 2-11 | R'G'B'2: 0-1 | R'3: 2-11

Bit Number

Word 9 (MSB) 8 7 6 5 4 3 2 1 0 (LSB)

Not B8 | EP | G'n:1 | G'n:0 | B'n:1 | B'n:0 | R'n:1 | R'n:0 | Reserved | Reserved

Table 19. Mapping structure for R'G'B' 0-1.

Table 18. Channel representation for RGB 12-bit.

Page 39: A Guide to Standard and High-Definition Digital Video ... to Standard HD Digital... · A Guide to Standard and High-Definition Digital Video Measurements ... of television, video

A Guide to Standard and High-Definition Digital Video Measurements Primer

www.tektronix.com/video_audio

The SIM option of the WFM7120 shows the two separate links in Figure 39. Notice how the Y' channel of Link B does not resemble the other waveform displays, since it comprises the 0-1 bit data of the R', G', and B' signals. Many people will already be familiar with the 10-bit values used within the SDI format, since this is in common use today. However, many users may not be used to dealing with the video signal in 12-bit values. Therefore, Figure 38 provides some useful information regarding the level differences between 10-bit and 12-bit values.

Figure 39. Waveform displays of Dual Link A and B signals for R'G'B' 4:4:4 12-bit format (left: Link A, right: Link B).

Figure 38. Representation of 12-bit and 10-bit data values.
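Since Figure 38 cannot be reproduced in text, the relationship it illustrates can be sketched numerically; the assumption here is that the 12-bit scale is simply the 10-bit scale with two extra LSBs (a factor of four):

```python
# 10-bit levels map to 12-bit levels by appending two extra LSBs (x4).
def ten_to_twelve(v10):
    return v10 << 2   # 0-1023 -> 0-4092, in steps of 4

def twelve_to_ten(v12):
    return v12 >> 2   # truncate the two extra bits of precision

# Familiar 10-bit reference levels and their 12-bit equivalents:
for name, v10 in [("black", 64), ("white", 940), ("max", 1023)]:
    print(name, v10, "->", ten_to_twelve(v10))
```

So 10-bit black (64) sits at 256 on the 12-bit scale, and 10-bit white (940) at 3760; the extra two bits provide finer steps between the same nominal levels.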


Link A   C'b0: 2-11   Y'0: 2-11          C'r0: 2-11   Y'1: 2-11          C'b2: 2-11   Y'2: 2-11          C'r2: 2-11
Link B   C'b1: 2-11   Y'C'bC'r0: 0-1     C'r1: 2-11   Y'C'bC'r1: 0-1     C'b3: 2-11   Y'C'bC'r2: 0-1     C'r3: 2-11

Bit:    9 (MSB)   8    7       6       5        4        3        2        1          0 (LSB)
Value:  Not B8    EP   Y'n:1   Y'n:0   C'bn:1   C'bn:0   C'rn:1   C'rn:0   Reserved   Reserved

Table 20. Channel representation for Y’C’bC’r 12-bit.

Figure 40. Waveform displays of Dual Link A and B signals for Y'C'bC'r 4:4:4 12-bit format (left: Link A, right: Link B).

Table 21. Mapping structure for Y'C'bC'r 0-1.

Y'C'bC'r 4:4:4 12-bit (30, 29.97, 25, 24, 23.98 Progressive/PsF; 60, 59.94 and 50 Interlaced)

The structure of the Y'C'bC'r 12-bit data is similar to the R'G'B' 12-bit structure, where G' is equivalent to Y', B' is equivalent to C'b and R' is equivalent to C'r. Table 20 shows the channel mapping for the Y'C'bC'r samples and Table 21 shows the bit 0-1 mapping structure within the 10-bit data word. Figure 40 shows the waveforms of both links using the SIM option on the WFM7120.


Link A   C'b0: 2-11   Y'0: 2-11          C'r0: 2-11   Y'1: 2-11    C'b2: 2-11   Y'2: 2-11          C'r2: 2-11
Link B   A0: 0-9      Y'C'bC'r0: 0-1     A1: 0-9      Y'1: 0-1     A2: 0-9      Y'C'bC'r2: 0-1     A3: 0-9

Bit:    9 (MSB)   8    7       6       5        4        3        2        1          0 (LSB)
Value:  Not B8    EP   Y'n:1   Y'n:0   C'bn:1   C'bn:0   C'rn:1   C'rn:0   Reserved   Reserved

Table 22. Channel representation for Y'C'bC'r (A) 4:2:2:4 12-bit.

Bit:    9 (MSB)   8    7       6       5          4          3          2          1          0 (LSB)
Value:  Not B8    EP   Y'n:1   Y'n:0   Reserved   Reserved   Reserved   Reserved   Reserved   Reserved

Table 24. Mapping structure for Y' 0-1.

Table 23. Mapping structure for Y'C'bC'r 0-1.

Y'C'bC'r (A) 4:2:2:4 12-bit (30, 29.97, 25, 24, 23.98 Progressive/PsF; 60, 59.94 and 50 Interlaced)

For those applications that need to transport the Alpha channel and Y'C'bC'r 12-bit data, the following data stream is defined for 12-bit within the constraints of the 10-bit SDI structure. The MSBs of the Y'C'bC'r samples, bits 2-11, are carried in Link A and conform to the C'bY'C'rY' multiplex of the SDI signal. The 10-bit Alpha channel and the LSBs of Y'n, C'bn, C'rn and Y'n+1 are carried in Link B and mapped according to Table 22. The 0-1 bits of the Y'C'bC'r samples are carried in the 10-bit word as defined in Table 23, and the additional Y' samples are mapped as shown in Table 24.


Figure 41 shows the waveforms of both links using the SIM option on the WFM7120. In this case, Link A represents a standard Y'C'bC'r signal, whereas the Y' channel of Link B contains only the 0-1 data bits of the Y'C'bC'r samples, and the C'b and C'r channels carry the Alpha channel information.

To maintain the overall data rate of the SDI signal, the total number of words per line is varied for each of the different formats to achieve a constant data rate. Table 25 shows the various line lengths for each of the Dual Link formats.

There are several challenges when dealing with Dual Link signals within a facility, in part because the data is mapped between the two links. Within a video facility the two links can be routed along different paths, which can potentially introduce timing errors between them. The SMPTE 372M standard defines an allowable timing difference of 40 ns between the two links at the output of the source device, but does not define an allowable maximum timing difference at the receiving end. Therefore, it is important to check the specifications of each piece of equipment to know the allowable range of timing difference at its inputs, and to ensure that the electrical lengths of the paths carrying the two Dual Link signals are identical. In some cases, the internal buffer within a piece of equipment may be able to compensate for an inter-channel timing difference applied to its input. However, care should be taken not to exceed the specification of the device; if this timing difference is exceeded, a shift will occur between the channels and the data will not be combined correctly.

Figure 41. Waveform displays of Dual Link A and B signals for Y'C'bC'r (A) 4:2:2:4 12-bit format (left: Link A, right: Link B).

Frame/Field Rate                          Total Words per Line   Total Active Words per Line
60 or 59.94 fields, 30 or 29.97 frames    2200                   1920
50 fields, 25 frames                      2640                   1920
24 or 23.98 frames                        2750                   1920

Table 25. Line length structure of Dual Link formats.
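Table 25's constant-rate claim can be cross-checked with a few lines of arithmetic (my own verification, assuming the 1125 total lines per frame of the 1080-line raster):

```python
# Words/line x lines/frame x frames/s is constant across the Table 25 formats.
LINES = 1125  # total lines per frame for the 1080-line raster
formats = [
    (2200, 30),  # 60/59.94 fields, 30/29.97 frames
    (2640, 25),  # 50 fields, 25 frames
    (2750, 24),  # 24/23.98 frames
]
rates = [words * LINES * fps for words, fps in formats]
print(rates)  # -> [74250000, 74250000, 74250000]
```

Each parallel data stream therefore always carries 74.25 Mwords/s; at 10 bits per word across the two streams (Y' and C'b/C'r), this is the familiar 1.485 Gb/s rate of each link.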


Within the WFM and WVR series, the timing display shows the inter-channel timing difference of Link B with respect to Link A when a Dual Link signal is applied to the input, as shown in Figure 43. In this case, a total of 161 ns (12 clocks) was measured as the inter-channel timing difference between Link B and Link A. Note that the inter-channel timing measurement measures the timing between the two links themselves and does not directly affect the timing measurement between the reference and input signal. Within the instrument it is also possible to set up an alarm threshold for when the timing between the two channels exceeds a number of clock samples.
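The relationship between the clock count and the nanosecond readout is simple to reproduce (assuming the 74.25 MHz HD word clock; for the /1.001 rates the numbers shift by 0.1%):

```python
CLOCK_HZ = 74.25e6  # HD-SDI word clock per link

def clocks_to_ns(clocks, clock_hz=CLOCK_HZ):
    return clocks / clock_hz * 1e9

print(round(clocks_to_ns(12), 1))   # ~161.6 ns, displayed as 161 ns
print(round(40e-9 * CLOCK_HZ))      # SMPTE 372M's 40 ns source tolerance is ~3 clocks
```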

Another potential problem within the video facility is that the two links could become swapped at the input to the device, or a link could be missing or corrupt. To verify signal integrity, the Video Session display of the WFM or WVR series can be used to quickly identify problems. The assumption within the instrument is that Link A is the dominant signal and must be present in order for the Dual Link signals to be combined correctly. If Link B has the incorrect format or the wrong video payload identification, the Video Session display will indicate a link error. When Link B is missing, the error message displays "partial dual link." If Link A and Link B are incorrectly connected to the instrument, the error message "Links swapped" will be displayed in the Video Session display, provided the video payload identification is present.

These types of path length and connection issues can be resolved by migrating to a single SDI cable. However, doing so requires that the data rate of the signal be doubled in order to carry the entire data stream. Advances in technology now allow 3 Gb/s high-speed serial data to be carried on a single coaxial cable.

Figure 43. Inter-channel timing measurement of a Dual Link signal.

Figure 42. Video Session display showing incorrect connection of Dual Link signals.


Data Stream 1 Y’0 Y’1 Y’2 Y’3 Y’4 Y’5

Data Stream 2 C’b0 C’r0 C’b1 C’r1 C’b2 C’r2

Table 26. Mapping tructure one of the virtual interfaces.

Figure 44. 3 Gb/s SDI data for mapping structure one.

3 Gb/s (SDI) High-Speed Data

SMPTE has standardized the 3 Gb/s format within two documents: SMPTE 424M discusses the serial data interface and SMPTE 425M describes the source image format mapping. Table 14 shows the supported mapping structures, which are mapped slightly differently than the Dual Link mapping structure. The SDI signal has an identical HD structure and contains two virtual interfaces into which the data is mapped. The definitions of EAV, SAV, Line Count (LN0, LN1; Table 3) and Checksum (CR0, CR1; Table 4) conform to the HD-SDI signal standards.

Mapping Structure One: Fast Progressive Formats (Y'C'bC'r 4:2:2 10-bit @ 60p, 59.94p, 50p)

Data stream one of the virtual interface for a fast progressive format contains the Y' luma data and data stream two contains the C'b/C'r chroma information, as defined in Table 26. These two virtual interfaces are then multiplexed together to form the 10-bit parallel interface, which is then converted into the serial signal as shown in Figure 44.


The WFM7120 with the 3G option provides the ability to input a fast progressive signal and display the traditional waveform displays of the signal. A maximum of two waveform traces can be displayed simultaneously, along with a variety of picture and status displays, as shown in Figure 46. The Video Session display provides useful information regarding the format of the signal and quickly identifies it by using the video payload identifier (SMPTE 352M), which should be present within the signal.

By using the data list display of the DAT option of the WFM7120, the user can view the data structure of the fast progressive signal in two ways: either as the Data format, which shows the data in the two virtual interfaces, or as the Video format, which shows the data as the final video signal is assembled.

Figure 46. Fast progressive 1080p 59.94 3 Gb/s level A color bar signal shown on WFM7120.


Figure 47. Mapping structure one for the fast progressive signals.

The Y' data of the fast progressive signal is sampled at 148.5 MHz or (148.5/1.001) MHz, depending on the format. The color difference signals are sampled at half the clock rate, 74.25 MHz or (74.25/1.001) MHz, for each C'b and C'r sample to produce the 4:2:2 sampling structure. Figure 47 shows how the Y', C'b and C'r samples are combined into the two virtual interfaces. There are a total of 1920 (0-1919) samples for the active picture, and the blanking width is changed for the various formats to maintain a constant data rate; Table 29 shows the samples per line for the various frame rates.
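The word-by-word multiplex of the two virtual interfaces can be sketched as follows (an illustration with placeholder sample names, not an exact wire format):

```python
def multiplex(stream1, stream2):
    """Interleave the two virtual interfaces word by word."""
    out = []
    for y, c in zip(stream1, stream2):
        out += [c, y]   # chroma word, then luma word
    return out

stream1 = ["Y'0", "Y'1", "Y'2", "Y'3"]       # data stream one: luma
stream2 = ["C'b0", "C'r0", "C'b1", "C'r1"]   # data stream two: chroma
print(multiplex(stream1, stream2))
# -> ["C'b0", "Y'0", "C'r0", "Y'1", "C'b1", "Y'2", "C'r1", "Y'3"]
```

The interleave reproduces the familiar C'b Y' C'r Y' ordering of the parallel interface before serialization.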

Level A and Level B

Level A within the SMPTE 425M standard defines the specific direct image format mapping as initially discussed for the fast progressive format. This mapping structure is different than the Dual Link SMPTE 372M standard. However, SMPTE 425M makes provision for carrying a Dual Link signal mapped into a 3 Gb/s signal, and this is defined as Level B. In this case the data from Link A is mapped into virtual interface one and the Link B information is mapped into virtual interface two. Figure 48 shows how the Dual Link data is mapped into the two virtual interfaces of the 3 Gb/s signal. Within the data list display of the WFM7120, the data mode shows the data of both links A and B as transported over the 3 Gb/s interface (Figure 49).

Mapping Structure Two: 4:4:4 R'G'B'/Y'C'bC'r and 4:4:4:4 R'G'B'(A)/Y'C'bC'r(A) 10-bit signals (30, 29.97, 25, 24, 23.98 Progressive/PsF; 60, 59.94 and 50 Interlaced)

Mapping structure two supports the carriage of 4:4:4-sampled R'G'B' or Y'C'bC'r data and has application for both 1080 and 720 formats. Data stream one carries all of the G' and R' samples, and data stream two carries all of the Alpha and B' samples. Each of the channels is sampled at 74.25 MHz or 74.25 MHz/1.001. In the case of the Y'C'bC'r format, the G' samples are replaced by Y' and the color difference values C'b/C'r replace the B'/R' samples, respectively.

Figure 48. Level B mapping structure for Dual Link fast progressive signals in a 3 Gb/s interface.

Figure 49. Data list display of a fast progressive level B signal.


Mapping Structure Three: 4:4:4 R'G'B'/Y'C'bC'r 12-bit signals (30, 29.97, 25, 24, 23.98 Progressive/PsF; 60, 59.94 and 50 Interlaced); 4:4:4 X'Y'Z' 12-bit signals (24 Frames Progressive, PsF)

Mapping structure three allows 12-bit data to be carried within the SDI transport as either R'G'B', Y'C'bC'r or X'Y'Z' formats. The 12-bit data, represented as [11:0], has to be mapped into a 10-bit structure, so each 12-bit sample is separated into four parts ([11:9], [8:6], [5:3], [2:0]). Each of these values is then combined into a 10-bit word for each of the components R'G'B', Y'C'bC'r or X'Y'Z', as defined in Table 27. These data words are then distributed across the two virtual interfaces: the bits [11:9] and [5:3] are carried by virtual interface one, and the remaining data words, [8:6] and [2:0], are carried by virtual interface two, as shown in Figure 51. In the case of the Y'C'bC'r format, the G' samples are replaced by Y' and the color difference values C'b/C'r replace the B'/R' samples, respectively. In digital cinema applications, a different color space, X'Y'Z', is used to give a greater dynamic range to the representation of color, replicating the color depth available from film; SMPTE 428 defines the various parameters of this color space. In the case of the X'Y'Z' format, the R' samples are replaced by X', the G' samples by Y' and the B' samples by Z'.
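The four-way split described above is easy to express in code (an illustrative sketch of the [11:9]/[8:6]/[5:3]/[2:0] separation, without the surrounding word framing):

```python
def split_12bit(sample):
    """Split a 12-bit sample into the four 3-bit groups of Table 27."""
    return ((sample >> 9) & 0x7,   # [11:9] -> virtual interface one, word 1
            (sample >> 6) & 0x7,   # [8:6]  -> virtual interface two, word 1
            (sample >> 3) & 0x7,   # [5:3]  -> virtual interface one, word 2
            sample & 0x7)          # [2:0]  -> virtual interface two, word 2

hi, mh, ml, lo = split_12bit(0xABC)
print(hi, mh, ml, lo)                                    # -> 5 2 7 4
assert (hi << 9) | (mh << 6) | (ml << 3) | lo == 0xABC   # lossless round trip
```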

Each of the channels is sampled at 74.25 MHz or 74.25 MHz/1.001. To maintain the constant 3 Gb/s data rate for the various supported formats, the blanking width is changed. Table 29 defines the total words per line for each of the formats.

Figure 50. (Level A) R’G’B’ mapping structure two.

Figure 51. Level A R’G’B’ mapping structure three.

Word                                             Bit 9    Bits 8-6           Bits 5-3           Bits 2-0
Data stream one, first word of sample (a)/(n)    Not B8   R'(a)/(n) [11:9]   G'(a)/(n) [11:9]   B'(a)/(n) [11:9]
Data stream one, second word of sample (a)/(n)   Not B8   R'(a)/(n) [5:3]    G'(a)/(n) [5:3]    B'(a)/(n) [5:3]
Data stream two, first word of sample (a)/(n)    Not B8   R'(a)/(n) [8:6]    G'(a)/(n) [8:6]    B'(a)/(n) [8:6]
Data stream two, second word of sample (a)/(n)   Not B8   R'(a)/(n) [2:0]    G'(a)/(n) [2:0]    B'(a)/(n) [2:0]

Table 27. 12-bit mapping structure of R'G'B' into the 10-bit virtual interface.


Mapping Structure Four: Y'C'bC'r 12-bit signals (30, 29.97, 25, 24, 23.98 Progressive/PsF; 60, 59.94 and 50 Interlaced)

In order to map this 12-bit data into the 10-bit infrastructure of the SDI interface, the 12-bit data, represented as [11:0], has to be divided into different words. In mapping structure four, the first half of the Y' data, bits [11:6], is carried in virtual interface one and the remaining Y' data bits [5:0] are carried in the next word of virtual interface one, as shown in Table 28. Figure 52 shows how the data packets are combined into the two virtual interfaces. The luma signal (Y') is sampled at 74.25 MHz or 74.25 MHz/1.001, and the chroma channels (C'b/C'r) are sampled at half this rate, 37.125 MHz or 37.125 MHz/1.001.
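The corresponding split for mapping structure four is a simple two-way division (an illustrative sketch, without the surrounding word framing):

```python
def split_y_12bit(sample):
    """Split a 12-bit sample into the two 6-bit halves of mapping structure four."""
    return (sample >> 6) & 0x3F, sample & 0x3F   # [11:6], then [5:0]

msb6, lsb6 = split_y_12bit(0xABC)
print(msb6, lsb6)                    # -> 42 60
assert (msb6 << 6) | lsb6 == 0xABC   # reassembles losslessly
```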

Figure 52. Level A Y’C’bC’r mapping structure four.

Word                                             Bit 9    Data bits           Remaining bits
Data stream one, first word of sample (a)/(n)    Not B8   Y'(a)/(n) [11:6]    Reserved
Data stream one, second word of sample (a)/(n)   Not B8   Y'(a)/(n) [5:0]     Reserved
Data stream two, first word of sample (a)/(n)    Not B8   C'b(a)/(n) [11:6]   Reserved
Data stream two, second word of sample (a)/(n)   Not B8   C'b(a)/(n) [5:0]    Reserved
Data stream two, third word of sample (a)/(n)    Not B8   C'r(a)/(n) [11:6]   Reserved
Data stream two, fourth word of sample (a)/(n)   Not B8   C'r(a)/(n) [5:0]    Reserved

Table 28. 12-bit mapping structure of Y'C'bC'r into the 10-bit virtual interface.

Frame Rate     Total Words per Line   Total Active Words per Line
24 or 23.98    4125                   1280
25             3960                   1280
30 or 29.97    3300                   1280
50             1980                   1280
60 or 59.94    1650                   1280
24 or 23.98    2750                   1920
25             2640                   1920
30 or 29.97    2200                   1920
24 or 24PsF    2750                   2048

Table 29. Sampling structure of the video line for the various frame rates.


Digital Audio

One of the advantages of the digital interface is the ability to embed (multiplex) several channels of digital audio into the digital video. This is particularly useful in large systems, where separate routing of digital audio becomes a cost consideration and the assurance that the audio is associated with the appropriate video is an advantage. In smaller systems, such as a post-production suite, it is generally more economical to maintain separate audio, thus eliminating the need for numerous multiplexer and demultiplexer modules. Handling of digital audio is defined in ANSI/SMPTE Standard 272M, Formatting AES/EBU Audio and Auxiliary Data into Digital Video Ancillary Data Space, for 525/60 and 625/50 ANSI/SMPTE 259M formats, and in ANSI/SMPTE 299M, 24-Bit Digital Audio Format for HDTV Bit-Serial Interface, for ANSI/SMPTE 292M formats.

Two to sixteen AES/EBU audio channels are transmitted in pairs and combined where appropriate into groups of four channels. Each group is identified by a unique ancillary data ID. Audio is sampled at a video-synchronous clock frequency of 48 kHz, or optionally at synchronous or asynchronous rates from 32 kHz to 48 kHz.

Ancillary data is formatted into packets prior to multiplexing it into the video data stream, as shown in Figure 53. Each data block may contain up to 255 user data words, provided there is enough total data space available to include the seven (for component video) words of overhead. For composite digital, only the vertical sync broad pulses have enough room for the full 255 words. Multiple data packets may be placed in individual data spaces.

At the beginning of each data packet is a header using word values that are excluded for digital video data and reserved for synchronizing purposes. For component video, a three-word header 000h, 3FFh, 3FFh is used. Each type of data packet is identified with a different Data ID word, and several Data ID words are defined to organize the various data packets used for embedded audio. The Data Block Number (DBN) is an optional counter that can be used to provide sequential order to ancillary data packets, allowing a receiver to determine if data is missing. As an example, with embedded audio, an interruption in the DBN sequence may be used to detect the occurrence of a vertical interval switch, thereby allowing the receiver to process the audio data to remove the likely transient "click" or "pop." Just prior to the data is the Data Count word, indicating the amount of data in the packet. Finally, following the data is a checksum that is used to detect errors in the data packet.

Figure 53. Ancillary data formatting.
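The packet layout just described can be sketched as a small builder (my own illustration; the checksum here is assumed to follow the usual SMPTE 291M convention of a 9-bit sum over DID through the last user data word, with bit 9 set to the inverse of bit 8):

```python
def build_anc_packet(did, dbn, user_words):
    """Assemble a component-video ancillary data packet (sketch).

    Header 000h/3FFh/3FFh, then DID, DBN, Data Count, the user data,
    and a checksum over DID..last user word (9-bit sum, b9 = NOT b8).
    """
    assert len(user_words) <= 255       # a data block holds at most 255 user words
    body = [did, dbn, len(user_words)] + list(user_words)
    cs = sum(w & 0x1FF for w in body) & 0x1FF   # 9-bit sum of bits 8-0
    cs |= ((cs >> 8) ^ 1) << 9                  # bit 9 is the inverse of bit 8
    return [0x000, 0x3FF, 0x3FF] + body + [cs]

pkt = build_anc_packet(did=0x2FF, dbn=0x200, user_words=[0x123, 0x0AB])
print([hex(w) for w in pkt])
```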


Embedded audio in component digital video

Embedded audio and the available options are defined in ANSI/SMPTE Standard 272M for standard definition and ANSI/SMPTE 299M for high-definition studio digital formats. Please refer to the most current version of those documents. A basic embedded audio configuration with two AES channel-pairs as the source is shown in Figure 54.

The Audio Data Packet contains one or more audio samples from up to four audio channels. 23 bits (20 audio bits plus the C, U, and V bits) from each AES sub-frame are mapped into three 10-bit video words (X, X+1, X+2) as shown in Table 30.

Bit-9 is always the inverse of bit-8 to ensure that none of the excluded word values (3FFh through 3FCh or 003h through 000h) are used. The Z-bit is set to "1" corresponding to the first frame of the 192-frame AES block. Channels of embedded audio are essentially independent (although they are always transmitted in pairs), so the Z-bit is set to "1" in each channel even if derived from the same AES source. C, U, and V bits are mapped from the AES signal; however, the parity bit is not the AES parity bit. Bit-8 in word X+2 is even parity for bits 0-8 in all three words.

There are several restrictions regarding distribution of the audio data packets, although there is a "grandfather clause" in the standard to account for older equipment that may not observe all the restrictions. Audio data packets are not transmitted in the horizontal ancillary data space following the normal vertical interval switch as defined in RP 168. They are also not transmitted in the ancillary data space designated for the error detection checkwords defined in RP 165. Taking into account these restrictions, data should be distributed as evenly as possible throughout the video field. This is important to minimize receiver buffer size for transmitting 24-bit audio in composite digital systems, and it results in either three or four audio samples in each audio data packet.

Bit   X          X + 1    X + 2
b9    not b8     not b8   not b8
b8    AUD 5      AUD 14   Parity
b7    AUD 4      AUD 13   C
b6    AUD 3      AUD 12   U
b5    AUD 2      AUD 11   V
b4    AUD 1      AUD 10   AUD 19 (msb)
b3    AUD 0      AUD 9    AUD 18
b2    ch bit-1   AUD 8    AUD 17
b1    ch bit-2   AUD 7    AUD 16
b0    Z-bit      AUD 6    AUD 15

Figure 54. Basic embedded audio.

Table 30. Embedded Audio Bit Distribution.
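Table 30 can be exercised with a small packing routine (my own sketch; the parity bit is computed so that the total count of ones in bits 0-8 of all three words is even, as described above, and the `ch1`/`ch2` parameters stand for the two channel-number bits of the table):

```python
def popcount(v):
    return bin(v).count("1")

def pack_sd_audio(aud20, c, u, v, z, ch1, ch2):
    """Map one AES sub-frame into three 10-bit words X, X+1, X+2 (Table 30)."""
    x  = (z & 1) | ((ch2 & 1) << 1) | ((ch1 & 1) << 2) | ((aud20 & 0x3F) << 3)
    x1 = (aud20 >> 6) & 0x1FF                                                 # AUD 6-14 in b0-b8
    x2 = ((aud20 >> 15) & 0x1F) | ((v & 1) << 5) | ((u & 1) << 6) | ((c & 1) << 7)
    # b8 of X+2 makes the count of ones in bits 0-8 of all three words even
    p = (popcount(x) + popcount(x1) + popcount(x2)) & 1
    x2 |= p << 8
    # b9 of every word is the inverse of its b8, avoiding the excluded values
    return tuple(w | ((((w >> 8) & 1) ^ 1) << 9) for w in (x, x1, x2))

x, x1, x2 = pack_sd_audio(aud20=0xABCDE, c=1, u=0, v=1, z=1, ch1=0, ch2=0)
print(f"{x:03x} {x1:03x} {x2:03x}")  # -> 2f1 2f3 2b5
```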


Extended embedded audio

Full-featured embedded audio is defined in the aforementioned standards to include:

- Carrying the 4 AES auxiliary bits (which may be used to extend the audio samples to 24 bits)
- Allowing non-synchronous clock operation
- Allowing sampling rates other than 48 kHz
- Providing audio-to-video delay information for each channel
- Documenting Data IDs to allow up to 16 channels of audio in component digital systems
- Counting "audio frames" for 525-line systems

To provide these features, two additional data packets are defined. Extended Data Packets carry the 4 AES auxiliary bits, formatted such that one video word contains the auxiliary data for two audio samples (Figure 55).

Extended data packets must be located in the same ancillary data space as the associated audio data packets and must follow the audio data packets.

The Audio Control Packet (shown in Figure 56) is transmitted once per field, in the second horizontal ancillary data space after the vertical interval switch point. It contains information on audio frame number, sampling frequency, active channels, and the relative audio-to-video delay of each channel. Transmission of audio control packets is optional for 48 kHz synchronous operation and required for all other modes of operation (since it contains the information as to what mode is being used).

Audio frame numbers are an artifact of 525-line, 29.97 frame/second operation. There are exactly 8008 audio samples in five frames, which means there is a non-integer number of samples per frame. An audio frame sequence is the number of frames required for an integer number of samples (in this case five), and the audio frame number indicates where in the sequence a particular frame belongs. This is important when switching between sources, because certain equipment, most notably digital video recorders, requires consistent synchronous operation to prevent buffer over/underflow. Where frequent switching is planned, receiving equipment can be designed to add or drop a sample following a switch in the four out of five cases where the sequence is broken. The challenge in such a system is to detect that a switch has occurred. This can be facilitated by use of the data block number in the ancillary data format structure and by including an optional frame counter with the unused bits in the audio frame number word of the audio control packet.
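The five-frame audio sequence is easy to verify, using exact rational arithmetic to keep the 29.97 Hz rate as 30000/1001:

```python
from fractions import Fraction

frame_rate = Fraction(30000, 1001)        # 29.97... frames/s, kept exact
samples_per_frame = Fraction(48000) / frame_rate
print(samples_per_frame)                  # -> 8008/5, a non-integer per frame
print(samples_per_frame * 5)              # -> exactly 8008 samples in five frames
```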

Figure 56. Audio control packet formatting.

Figure 55. Extended embedded audio.


Audio delay information contained in the audio control packet uses a default channel-pair mode. That is, delay-A (DELA0-2) applies to both channel 1 and channel 2 unless the delay for channel 2 is not equal to that of channel 1; in that case, the delay for channel 2 is located in delay-C. Sampling frequency must be the same for each channel in a pair, hence the data in "ACT" provides only two values, one for channels 1 and 2 and the other for channels 3 and 4.

In order to provide for up to 16 channels of audio in component digital systems, the embedded audio is divided into audio groups corresponding to the basic four-channel operation. Each of the three data packet types is assigned four data IDs, as shown in Table 31.

In component digital video, the receiver buffer in an audio demultiplexer is not a critical issue, since there is much ancillary data space available and few lines exclude audio ancillary data. The case is considerably different for composite digital video, due to the exclusion of data in equalizing pulses and, even more important, the data packet distribution required for extended audio. For this reason the standard requires a receiver buffer of 64 samples per channel, with a grandfather clause of 48 samples per channel to warn designers of the limitations of older equipment.

Systemizing AES/EBU audio

Serial digital video and audio are becoming commonplace in production and post-production facilities as well as television stations. In many cases, the video and audio are married sources, and it may be desirable to keep them together and treat them as one data stream. This has, for one example, the advantage of being able to keep the signals in the digital domain and switch them together with a serial digital video routing switcher. In the occasional instances where it's desirable to break away some of the audio sources, the digital audio can be demultiplexed and switched separately via an AES/EBU digital audio routing switcher.

At the receiving end, after the multiplexed audio has passed through a serial digital routing switcher, it may be necessary to extract the audio from the video so that editing, audio sweetening, or other processing can be accomplished. This requires a demultiplexer that strips the AES/EBU audio from the serial digital video. The output of a typical demultiplexer has a serial digital video BNC as well as connectors for the two stereo-pair AES/EBU digital audio signals.

Group     Audio Channels   Audio Data Packet   Extended Data Packet   Audio Control Packet
Group 1   1-4              2FFh                1FEh                   1EFh
Group 2   5-8              1FDh                2FCh                   2EEh
Group 3   9-12             1FBh                2FAh                   2EDh
Group 4   13-16            2F9h                1F8h                   1ECh

Table 31. Data Identifiers for up to 16-Channel Operation of SD embedded audio.


Basic HD embedded audio

There are some similarities and several differences in the implementation of AES/EBU audio within an HD environment. The formatting of the ancillary data packets is the same between SD and HD. The information contained within the user data is different, because the full 24 bits of audio data are sent as a group and not split up into 20 bits of audio data plus an extended packet containing the 4 auxiliary bits. Therefore, the total number of bits used in HD is 29 bits (compared with 23 bits in SD); the 24 bits of audio data are placed in 4 ancillary data words along with the C, V, U and Z-bit flags. Additionally, the CLK and ECC words are added to the packet, as shown in Figure 57. Since the full 24 bits of audio data are carried within the user data, there is no extended data packet used within HD.

Conformance to the ancillary data packet structure means that the Ancillary Data Flag (ADF) has a three-word value of 000h, 3FFh, 3FFh, as in SMPTE 291M. The one-word DID (Data Identification) takes the values shown in Table 32 to identify the appropriate group of audio data. DBN is a one-word value for the data block number, and DC is a one-word data count, which is always 218h. The User Data Words (UDW) always contain 24 words of data and are structured as shown in Figure 57. The first two words, UDW0 and UDW1, are used for audio clock phase data and provide a means to regenerate the audio sampling clock. The data within these two words provides a count of the number of video clocks between the first word of EAV and the video sample corresponding to the audio sample.

Group     Audio Channels   Audio Data Packet   Audio Control Packet
Group 1   1-4              2E7h                1E3h
Group 2   5-8              1E6h                2E2h
Group 3   9-12             1E5h                2E1h
Group 4   13-16            2E4h                1E0h

Figure 57. Structure of HD audio data packet.

Table 32. Data Identifiers for up to 16-Channel Operation of HD embedded audio.


Each audio data subframe is distributed across 4 UDW samples as described in Table 33.

Note that the full preamble data is not carried within the 4 words; only a reference to the start of the 192-frame block is given, by use of the Z-bit indicator. Also, the parity bit is the one used within the 32-bit subframe, unlike standard definition.

The Error Correction Code (ECC) is a set of six words used to detect errors within the first 24 words, from the ADF to UDW17. The value is calculated by applying bits B0 to B7 of the 24 words to a BCH encoding circuit that produces the six ECC words.

The ancillary data information is multiplexed within the color difference Cb/Cr data space only, unlike the standard definition structure, which spreads the ancillary audio data across CbYCrY. The Y data space is used only for the audio control packet, which occurs once per field and is placed on the second line after the switching point. No ancillary data is placed within the signal on the line subsequent to the switching point. The switching point location depends on the format of the high-definition signal; for example, in the 1125/60 system no ancillary data is put on line 8.

Bit   UDW2          UDW3          UDW4          UDW5
B9    NOT B8        NOT B8        NOT B8        NOT B8
B8    EVEN PARITY   EVEN PARITY   EVEN PARITY   EVEN PARITY
B7    AUD1 3        AUD1 11       AUD1 19       P1
B6    AUD1 2        AUD1 10       AUD1 18       C1
B5    AUD1 1        AUD1 9        AUD1 17       U1
B4    AUD1 0        AUD1 8        AUD1 16       V1
B3    Z             AUD1 7        AUD1 15       AUD1 23 (MSB)
B2    0             AUD1 6        AUD1 14       AUD1 22
B1    0             AUD1 5        AUD1 13       AUD1 21
B0    0             AUD1 4        AUD1 12       AUD1 20

Bit   UDW6          UDW7          UDW8          UDW9
B9    NOT B8        NOT B8        NOT B8        NOT B8
B8    EVEN PARITY   EVEN PARITY   EVEN PARITY   EVEN PARITY
B7    AUD2 3        AUD2 11       AUD2 19       P2
B6    AUD2 2        AUD2 10       AUD2 18       C2
B5    AUD2 1        AUD2 9        AUD2 17       U2
B4    AUD2 0        AUD2 8        AUD2 16       V2
B3    0             AUD2 7        AUD2 15       AUD2 23 (MSB)
B2    0             AUD2 6        AUD2 14       AUD2 22
B1    0             AUD2 5        AUD2 13       AUD2 21
B0    0             AUD2 4        AUD2 12       AUD2 20

Bit   UDW10         UDW11         UDW12         UDW13
B9    NOT B8        NOT B8        NOT B8        NOT B8
B8    EVEN PARITY   EVEN PARITY   EVEN PARITY   EVEN PARITY
B7    AUD3 3        AUD3 11       AUD3 19       P3
B6    AUD3 2        AUD3 10       AUD3 18       C3
B5    AUD3 1        AUD3 9        AUD3 17       U3
B4    AUD3 0        AUD3 8        AUD3 16       V3
B3    Z             AUD3 7        AUD3 15       AUD3 23 (MSB)
B2    0             AUD3 6        AUD3 14       AUD3 22
B1    0             AUD3 5        AUD3 13       AUD3 21
B0    0             AUD3 4        AUD3 12       AUD3 20

Bit   UDW14         UDW15         UDW16         UDW17
B9    NOT B8        NOT B8        NOT B8        NOT B8
B8    EVEN PARITY   EVEN PARITY   EVEN PARITY   EVEN PARITY
B7    AUD4 3        AUD4 11       AUD4 19       P4
B6    AUD4 2        AUD4 10       AUD4 18       C4
B5    AUD4 1        AUD4 9        AUD4 17       U4
B4    AUD4 0        AUD4 8        AUD4 16       V4
B3    0             AUD4 7        AUD4 15       AUD4 23 (MSB)
B2    0             AUD4 6        AUD4 14       AUD4 22
B1    0             AUD4 5        AUD4 13       AUD4 21
B0    0             AUD4 4        AUD4 12       AUD4 20

Table 33. Bit Assignment of audio data.
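Table 33's packing can be made concrete with a short sketch. The following Python (an illustration of the table, not code from SMPTE 299M; the function name is ours) assembles channel 1's four user data words from a 24-bit sample and the Z, C, U, V and P flags, then appends the B8 even-parity bit and B9 = NOT B8:

```python
def pack_subframe(audio24, z=0, c=0, u=0, v=0, p=0):
    """Pack one channel's 24-bit sample plus AES flags into the four
    10-bit user data words UDW2-UDW5, per the bit layout of Table 33."""
    payloads = [
        # UDW2: b0-b2 = 0, b3 = Z flag, b4-b7 = audio bits 0-3
        (z << 3) | ((audio24 & 0x0F) << 4),
        # UDW3: b0-b7 = audio bits 4-11
        (audio24 >> 4) & 0xFF,
        # UDW4: b0-b7 = audio bits 12-19
        (audio24 >> 12) & 0xFF,
        # UDW5: b0-b3 = audio bits 20-23 (bit 23 is the MSB),
        # b4 = V, b5 = U, b6 = C, b7 = P
        ((audio24 >> 20) & 0x0F) | (v << 4) | (u << 5) | (c << 6) | (p << 7),
    ]
    words = []
    for b in payloads:
        b8 = bin(b).count("1") & 1   # B8: even parity over b0-b7
        b9 = b8 ^ 1                  # B9: NOT B8
        words.append(b | (b8 << 8) | (b9 << 9))
    return words
```

An all-zero sample with no flags packs to four 200h words, since each zero payload has even parity (B8 = 0) and therefore B9 = 1.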


Audio control packet

The audio control packet carries additional information used in the process of decoding the audio data and has a similar structure to standard definition. Its structure is shown in Figure 58 and contains the following information. The Ancillary Data Flag has the three-word value 000h, 3FFh, 3FFh. The one-word DID takes the values shown in Tables 31 and 32 to identify the appropriate group of audio data. DBN is always 200h and DC is always 10Bh. The UDW contains 11 words of data structured into five different types. The Audio Frame (AF) number data provides a sequential count of video frames to assist in indicating the position of the audio samples when a non-integer number of audio samples per frame is used. The one-word value RATE indicates the sampling rate of the audio data and whether the data is synchronous or asynchronous. The ACT word indicates the number of active channels within the group. DELm-n indicates the amount of accumulated audio processing delay relative to video, measured in audio sample intervals, for each channel pair 1&2 and 3&4. Figure 59 shows the decoded audio control packet display on the WVR series, which provides decoded information on the audio control packet data.

This is a slightly different format from that used in standard definition. The two-word value RSRV is reserved for future use.
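The AF number exists because, at 29.97 Hz frame rates, 48 kHz audio does not divide into an integer number of samples per frame: 48000 ÷ (30000/1001) = 1601.6, so embedders spread 8008 samples over a repeating five-frame sequence. The sketch below (helper name ours) derives that cadence:

```python
from fractions import Fraction

def sample_cadence(rate=48000, fps=Fraction(30000, 1001), frames=5):
    """Samples placed in each of `frames` consecutive video frames so the
    running total tracks the exact (non-integer) samples-per-frame ratio."""
    per_frame = Fraction(rate) / fps  # 8008/5 = 1601.6 at 29.97 fps
    # Cumulative sample counts at each frame boundary, truncated to integers
    boundaries = [int(per_frame * n) for n in range(frames + 1)]
    return [b - a for a, b in zip(boundaries, boundaries[1:])]
```

The result is a pattern of 1601- and 1602-sample frames summing to 8008; the AF number tells a decoder where the current frame sits in this sequence. At integer frame rates (e.g. 25 fps) the cadence collapses to a constant count per frame.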

Figure 58. Structure of audio control packet.

Figure 59. Audio Control Packet Display.


How to monitor multi-channel audio

Audio monitoring has typically been done by watching the audio levels of the signal and ensuring they remain within reasonable limits. When stereo was introduced, the need to monitor the interaction between the channels became important to ensure a correctly balanced stereo image. The phase (Lissajous) display is used to monitor the interaction of the two channels. (The Audio Monitoring application note 21W-16463-01 provides detail on how to use the Lissajous display.)

The development of multi-channel, surround sound audio technology has greatly enhanced the viewing experience. Surround-sound technology has emerged within digital television and digital video technologies to create the home theater experience. The combination of enhanced picture quality and surround sound gives viewers a sense of total immersion and complete involvement in the program.

In audio production, a visual representation of the sound image complements the auditory experience, helping audio engineers create the desired audio mix or more precisely adjust the audio content in post production. In broadcast facilities, such a visual display helps operators notice problems in multi-channel audio content more quickly and assists engineers in rapidly isolating the problem.

Audio channels in 5.1 surround sound

For several years, the film industry has used a multi-channel audio system as a standard format for cinema-based audio. Increasingly, to reproduce this surround sound experience in the home and give consumers a more cinematic effect, 5.1 multi-channel audio has replaced stereo in home entertainment systems. DVDs typically have 5.1 audio, and the television industry has started distributing and broadcasting this audio format in DTV systems. In conventional use, a 5.1 multi-channel audio system does not try to locate sound at precise, arbitrary locations. Rather, the different channels have particular roles (see Figure 60).

– The left (L) and right (R) channels drive the speaker pair in front of the listener (the mains) and carry most of the music. They typically operate like a stereo system.

– The center (C) channel primarily carries dialog and drives a speaker positioned in front of the listener and between the mains.

– The left surround (Ls) and right surround (Rs) channels drive the left and right speaker pair placed to the side or behind the listener (the "surrounds"). They typically handle sound effects or ambient sounds that create the aural illusion of a particular environment or space.

– The low frequency effects (LFE) channel delivers low-frequency special effects, e.g. explosions, and drives a higher power, restricted frequency speaker (a subwoofer), typically positioned in front of the listener.

The L, R, C, Ls, and Rs channels form the "5" part of 5.1 multi-channel audio. They create the overall surround sound experience and handle the dialog and many special effects. They also exploit the sound localization characteristics of the auditory system to create appropriately located phantom sound sources. Below 150 Hz, the sound localization cues become much less effective. The LFE channel (the ".1" in 5.1 audio) has a relatively restricted role in creating these dramatic, non-localized effects.

Although the speaker device is called a subwoofer, in a surround sound system the channel is referred to as the Low Frequency Effects channel because, depending on the size of the viewer's speaker system, the LFE will have different responses. For instance, a system with small satellite speakers will not have enough low-frequency response to reproduce all the bass sounds, and in this case those sounds can be directed to the LFE channel. In the other case of large speakers in the room, they have more dynamic range to carry the lower-frequency bass sounds, and there is less need to direct them to the LFE channel.

Figure 60. Multi-Channel Surround Sound Speaker Placement.


Continuing extensions to the multi-channel audio system add further channels to the configuration. Systems with 6.1 or 7.1 channels are now in use. In 6.1-channel audio, an additional speaker is added to provide a mono back surround channel. In 7.1-channel systems, two speakers carry the back surround sound as a Left Rear Surround (Lb) and a Right Rear Surround (Rb). Additionally, it may be necessary to monitor the down-mix of the multi-channel audio to a stereo pair. This can be denoted as Lo-Ro for a standard stereo mix or as Lt (Left-total) - Rt (Right-total) for a stereo down-mix that is Dolby Pro-Logic™ encoded.

The surround sound display1

The surround sound display associates an audio level with each of the five primary channels in a 5.1 audio system by determining the channel's RMS signal level. It can compute an un-weighted RMS value or can apply a filter that produces a frequency-weighted RMS value. Applying this A-weighting filter adjusts for the frequency response of the human auditory system and yields an audio level value that better approximates the perceived loudness of the audio signal.

The display shows the audio level in the L, R, Ls, and Rs channels on four scales originating from the display center and oriented toward the display corners. The upper left, upper right, lower left, and lower right corners of the display correspond to a 0 dB level in the L, R, Ls and Rs channels, respectively. The display center represents -65 dBFS. As the signal level in a channel increases, the cyan-colored level indicator lengthens from the center towards the display corner for that channel. Each scale has marks at 10 dB intervals, with a mark at the user-defined test alignment level, typically -20 dBFS or -18 dBFS.
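The level computation described above can be sketched as follows; this is a simplified model, not the instrument's exact algorithm, and the function name, the full-scale convention, and the clamp to the -65 dBFS display floor are our assumptions:

```python
import math

def rms_dbfs(samples, full_scale=1.0, floor=-65.0):
    """Un-weighted RMS level of a sample block in dB relative to full
    scale, clamped to the display floor. For a loudness-oriented
    reading, an A-weighting filter would be applied to `samples`
    before calling this function."""
    if not samples:
        return floor
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0.0:
        return floor  # silence sits at the display center
    return max(floor, 20.0 * math.log10(rms / full_scale))
```

Under this convention a full-scale square wave reads 0 dBFS, and halving the amplitude lowers the reading by about 6 dB.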

The display connects the ends of the audio level indicators to form a polygon called the Total Volume Indicator (TVI). The TVI indicates the level balance among the main and surround channels and gives an indication of the total surround sound balance. The TVI indicates the amount of correlation between signals in adjacent channels using the following conventions.

– A straight line connecting the level indicators of two adjacent channels indicates that these channels have uncorrelated signals, i.e., a correlation value of 0.0.

– As the correlation between the two signals increases toward +1.0, the line connecting the level indicators bends outward, away from the center and towards the potential phantom sound source.

– As the signals move towards an out-of-phase condition, i.e., correlation values approach -1.0, the line bends inwards, towards the center, indicating the destructive interference and reduction in total sound volume associated with out-of-phase signals.
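The bending described in these conventions is driven by a correlation measurement between adjacent channels. A minimal sketch, with helper names of our own choosing and the color thresholds (0.9, 0.2, -0.3) taken from the PSI description later in this section:

```python
import math

def correlation(x, y):
    """Normalized zero-lag cross-correlation of two equal-length channel
    blocks: +1 for identical signals, -1 for inverted (out-of-phase)
    signals, near 0 for uncorrelated signals."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den if den else 0.0

def psi_color(corr):
    """Color of the Phantom Source Indicator line for a given
    correlation value, per the thresholds described in this section."""
    if corr > 0.9:
        return "white"   # highly localized phantom source
    if corr > 0.2:
        return "green"   # increasing location uncertainty
    if corr > -0.3:
        return "yellow"  # diffuse, ambient sound perception
    return "red"         # possibly undesirable out-of-phase condition
```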

Figure 61 shows audio test signals applied to each of the inputs of the L, R, Ls and Rs channels. This forms an octagon shape if the signals have the same amplitude and frequency, so the operator can quickly see if the channels are correctly aligned.

The center channel has a special role in a surround sound system, and the surround sound display handles this channel differently. The display indicates the center channel audio level as a yellow vertical line positioned between the left and right channel audio level indicators. The display forms a Center Volume Indicator (CVI) by connecting the ends of the L and C level indicators and the ends of the C and R level indicators. The TVI and CVI operate in the same manner.

Figure 61. Surround Sound Display with test tones for L, R, Ls and Rs.

1 Audio Surround Sound Display licensed from Radio-Technische Werkstätten GmbH & Co. KG (RTW) of Cologne, Germany.


Phantom Source Indicators (PSIs) positioned around the perimeter of the display offer additional help in visualizing sound localization. Four PSIs, placed one on each side of the display, indicate the nature of potential phantom sound sources formed by the L/R, L/Ls, Ls/Rs, and R/Rs adjacent channel pairs. These four PSIs operate in the same manner. Each PSI consists of a white tic mark, called the phantom source location pointer, which indicates the location of a potential phantom sound source. A variable-length line extending on both sides of this location pointer indicates the listener's ability to localize this source. If the signals in an adjacent channel pair have a +1 correlation, they create a phantom sound source in a precise location between the two speakers. The phantom source location pointer appears on the side associated with the adjacent channel pair.

The position of the white tic mark depends on the level relationship between the signals in the adjacent channels. A decrease in correlation between signals in an adjacent channel pair introduces some uncertainty in the location of the associated phantom sound source.

To indicate this, the PSI becomes a variable-length line extending from the white tic mark toward the display corners associated with the channel pair. As an additional visual aid, the line changes color as the correlation value crosses different threshold values.

For signal correlations above 0.9, the PSI is a very short white line, indicating a highly localized phantom sound source. For correlation values below 0.9, the line becomes green. It continues to lengthen on each side of the phantom source location pointer as the correlation decreases, indicating increasing uncertainty in the location of the phantom sound source. Once the line reaches a display corner, it will no longer lengthen with decreasing signal correlation. The location pointer remains in the position determined by the level balance between the adjacent channels. Consequently, unless the mark falls at the midpoint of a side, one end of the line will stop lengthening before the other.

For signal correlations below 0.2, the line turns yellow. When the signals become fully uncorrelated, i.e., the correlation value equals 0, the line will span the entire side of the display. This indicates that these adjacent channels will create a diffuse, ambient sound perception. Although the channel pair does not create a phantom sound source, the white tic mark still indicates the level balance between the channels. A further decrease in the signal correlation towards a -1 value does not change the length of the PSI or the position of the phantom source location pointer. The PSI will change color to red if the correlation falls below -0.3, indicating a possibly undesirable out-of-phase condition.

Figure 62 shows a live signal with dominant sound from the center channel compared to the left and right channels. The L-R, L-Ls and R-Rs pairs have straight lines connecting them, indicating uncorrelated signals between those channels. There is a slight dominance in the sound between the front L-R and the surround channels Ls-Rs, as shown by the stretching of the polygon shape. Also, the white tic marks on each side of the surround sound display indicate that the L and R front speakers are currently carrying the more dominant sound. The connection line between the Ls and Rs channels is bending outwards, and the PSI is a white tic mark between the channels, indicating that these channels are correlated and identical. The surround sound display is an intuitive interface to show the interaction between the multiple channels in a surround sound system.

Figure 62. Surround Sound Display with live audio signal.


Ancillary Data

Today a variety of ancillary data can be carried within the blanking interval. The addition of this ancillary data to the SDI signal allows the same transport to carry data associated with the video signal, and this data can be synchronized to the video. Ancillary data allows for the carriage of up to 16 channels of embedded audio in HD-SDI and up to 32 channels in Dual Link and 3 Gb/s formats. Additional metadata can be carried within the stream to provide information associated with the video or audio signals, such as the Video Payload Identification or Timecode.

SMPTE 291M defines the format and location of this ancillary data within the SDI signal. There are two defined types of ancillary data, as shown in Figure 63.

The Ancillary Data Flag (ADF) is used to identify the start of the ancillary data packet and uses the codeword 000h, 3FFh, 3FFh. This is the reverse of the code words used for EAV and SAV data. A Data Identification word (DID) signifies the type of data being carried so that equipment can quickly identify the type of data present within the signal.

For type 1 ancillary data, the Data Block Number (DBN) signifies the count of this particular data series. For instance, if this packet has a DBN of 12, then the next packet should have a DBN of 13; otherwise a data packet has been lost. This type 1 structure is used for embedded audio packets.
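A receiver can exploit the DBN to detect dropped packets in exactly this way. A sketch assuming the common convention that an active DBN counts 1 through 255 and wraps back to 1, with 0 meaning the count is inactive (function names ours):

```python
def expected_next_dbn(dbn):
    """Next Data Block Number for an active count: 1..255, wrapping to 1."""
    return 1 if dbn >= 255 else dbn + 1

def dbn_discontinuities(dbns):
    """Indices in a received packet sequence where the DBN chain breaks,
    indicating one or more lost packets. A previous DBN of 0 (inactive
    count) suppresses the check."""
    return [i for i in range(1, len(dbns))
            if dbns[i - 1] != 0 and dbns[i] != expected_next_dbn(dbns[i - 1])]
```

For example, the sequence 12, 13, 14 is continuous, while 12, 14 flags a discontinuity at the second packet.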

For type 2 ancillary data there is a Secondary Data ID (SDID), which replaces the DBN. This provides a wider range of allowed values and can be used to group a related series of data; for instance, the Dolby Vertical Ancillary (VANC) data has a series of SDIDs to identify the audio channels the data is associated with.

Figure 63. Ancillary data types 1 and 2.


The Data Count (DC) provides information on the number of User Data Words (UDW) within the ancillary data packet. The amount of user data that can be contained within an ancillary data packet is variable, up to a maximum of 255 words. Finally, a Checksum is added to ensure the integrity of the data packet.
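The checksum can be sketched as the SMPTE 291M-style computation: the nine-bit sum (b0 to b8) of every word from the DID through the last user data word, with b9 then set to the inverse of b8. The Python form below is illustrative:

```python
def anc_checksum(words):
    """Checksum word over the 10-bit words from the DID through the last
    UDW: keep the low 9 bits of the running sum, then set b9 = NOT b8."""
    s = sum(w & 0x1FF for w in words) & 0x1FF  # 9-bit sum, modulo 512
    b9 = ((s >> 8) & 1) ^ 1                    # b9 is the inverse of b8
    return s | (b9 << 9)
```

A receiver recomputes this value over the same span and compares it with the transmitted checksum word to detect corruption.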

By using the data list display of the WFM7120, the user can look through the data display to find the ancillary data packets, which are identified by the Ancillary Data Flag 000h, 3FFh, 3FFh. In this case, following the ADF are the values 241h and 101h, which indicate that this is an SMPTE 352M Video Payload Identification packet. There is a wide variety of ancillary data packets, each with a unique DID and SDID (for type 2). SMPTE RP 291 provides information on each of these identifiers in use. Table 34 shows the values for type 1 and Table 35 shows the values for type 2.
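In Tables 34 and 35, each parenthesized value is the 8-bit identifier extended to the 10-bit word actually transmitted: b8 carries even parity over b0 to b7 and b9 is its inverse. A sketch of that mapping (function name ours):

```python
def to_10bit(value8):
    """Extend an 8-bit DID/SDID/DC value to the transmitted 10-bit word:
    b8 = even parity of b0-b7, b9 = NOT b8."""
    b8 = bin(value8 & 0xFF).count("1") & 1   # 1 if an odd number of ones
    return (value8 & 0xFF) | (b8 << 8) | ((b8 ^ 1) << 9)
```

This reproduces the table entries: FFh becomes 2FFh, E3h becomes 1E3h, and 41h becomes 241h.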

Figure 64. Data list display of the WFM7120 showing the ancillary data of a SMPTE352M packet.

Standard   Description                             DID          Location

S291M      Undefined Data                          00h (200h)   -
S291M      Packet Marked for Deletion              80h (180h)   -
S291M      Start Packet                            88h (288h)   -
S291M      End Packet                              84h (284h)   -
RP165      Error Detection Handling                F4h (1F4h)   VANC
S272M      SD Group 1 Audio Data Packet            FFh (2FFh)   HANC
S272M      SD Group 2 Audio Data Packet            FDh (1FDh)   HANC
S272M      SD Group 3 Audio Data Packet            FBh (1FBh)   HANC
S272M      SD Group 4 Audio Data Packet            F9h (2F9h)   HANC
S272M      SD Group 1 Extended Audio Data Packet   FEh (1FEh)   HANC
S272M      SD Group 2 Extended Audio Data Packet   FCh (2FCh)   HANC
S272M      SD Group 3 Extended Audio Data Packet   FAh (2FAh)   HANC
S272M      SD Group 4 Extended Audio Data Packet   F8h (1F8h)   HANC
S272M      SD Group 1 Audio Control Packet         EFh (1EFh)   HANC
S272M      SD Group 2 Audio Control Packet         EEh (2EEh)   HANC
S272M      SD Group 3 Audio Control Packet         EDh (2EDh)   HANC
S272M      SD Group 4 Audio Control Packet         ECh (1ECh)   HANC
S299M      HD Group 1 Audio Data Packet            E7h (2E7h)   HANC
S299M      HD Group 2 Audio Data Packet            E6h (1E6h)   HANC
S299M      HD Group 3 Audio Data Packet            E5h (1E5h)   HANC
S299M      HD Group 4 Audio Data Packet            E4h (2E4h)   HANC
S299M      HD Group 1 Audio Control Packet         E3h (1E3h)   HANC
S299M      HD Group 2 Audio Control Packet         E2h (2E2h)   HANC
S299M      HD Group 3 Audio Control Packet         E1h (2E1h)   HANC
S299M      HD Group 4 Audio Control Packet         E0h (1E0h)   HANC
S315M      Camera Position Information             F0h (2F0h)   HANC or VANC

Table 34. Ancillary identification codes for type 1.


Standard   Description                        DID          SDID         DC

S291M      Undefined Data                     00h (200h)   00h (200h)   xxx
S291M      8-Bit Application                  04h (104h)   10h (110h)   xxx
S291M      Packet Marked for Deletion         80h (180h)   00h (200h)   xxx
S291M      Start Packet                       88h (288h)   00h (200h)   xxx
S291M      End Packet                         84h (284h)   00h (200h)   00h (200h)
S353       MPEG Recording Data                08h (108h)   08h (108h)   80h (180h)
S305M      SDTI Transport                     40h (140h)   01h (101h)   2Ah (12Ah)
S348       HD-SDTI Transport                  40h (140h)   02h (102h)   Variable
S          Link Encryption Message 1          40h (140h)   04h (104h)   -
S          Link Encryption Message 2          40h (140h)   05h (205h)   -
S          Link Encryption Metadata           40h (140h)   06h (206h)   -
S352M      Payload Identification             41h (241h)   01h (101h)   04h (104h)
S2016-3    AFD and Bar Data                   41h (241h)   05h (205h)   08h (108h)
S2016-4    Pan Scan Data                      41h (241h)   06h (206h)   60h (260h)
RP2010     ANSI/SCTE 104 Message              41h (241h)   07h (107h)   Variable
S2031      DVB/SCTE VBI Data                  41h (241h)   08h (108h)   Variable
ITU-R      Inter Station Control Data         43h (143h)   01h (101h)   Variable
RDD8       Subtitling Distribution Packet     43h (143h)   02h (102h)   Variable
RDD8       Transport of ANC Packet            43h (143h)   03h (203h)   Variable
RP214      KLV Metadata VANC                  44h (244h)   04h (104h)   Variable
RP214      KLV Metadata HANC                  44h (244h)   14h (214h)   Variable
RP223      Package of UMID Data               44h (244h)   44h (244h)   Variable
S2020-1    Compressed Audio Metadata          45h (145h)   01h (101h)   Variable
S2020      Compressed Audio Metadata          45h (145h)   02h (102h)   Variable
S2020      Compressed Audio Metadata          45h (145h)   03h (203h)   Variable
S2020      Compressed Audio Metadata          45h (145h)   04h (104h)   Variable
S2020      Compressed Audio Metadata          45h (145h)   05h (205h)   Variable
S2020      Compressed Audio Metadata          45h (145h)   06h (206h)   Variable
S2020      Compressed Audio Metadata          45h (145h)   07h (107h)   Variable
S2020      Compressed Audio Metadata          45h (145h)   08h (108h)   Variable
S2020      Compressed Audio Metadata          45h (145h)   09h (209h)   Variable
RDD8       WSS Data                           50h (250h)   01h (101h)   Variable
RP215      Film Codes in VANC                 51h (151h)   01h (101h)   Variable
S12M       Timecode (ATC) in VANC             60h (260h)   60h (260h)   10h (110h)
S334M      Closed Caption (EIA-708-B)         61h (161h)   01h (101h)   Variable
S334M      Closed Caption (EIA-608)           61h (161h)   02h (102h)   03h (203h)
S334M      Program Description (DTV)          62h (162h)   01h (101h)   Variable
S334M      Data Broadcast (DTV) VANC          62h (162h)   02h (102h)   Variable
RP208      VBI Data in VANC Space             62h (162h)   03h (203h)   Variable
RP196      Timecode (LTC) HANC                64h (164h)   64h (164h)   08h (108h)
RP196      Timecode (VITC) HANC               64h (164h)   7Fh (17Fh)   09h (209h)

Table 35. Ancillary identification codes for type 2.


Figure 65. Ancillary data on the WFM7120: (a) ANC Data Inspector; (b) Watch List configuration.


With this wide array of ancillary data, it can be difficult to find these data packets within the HD or SD signal. The Ancillary Data Inspector, available in the DAT option of the WFM7120, simplifies this task (Figure 65(a)). Users can create a Watch List in the configuration menu of the instrument to check for a certain type of ancillary data (Figure 65(b)). Alternatively, if no selection is made, the Watch List will show only the ancillary data present within the signal. The user can scroll through each of the available ancillary data types present. The Ancillary Data Display provides information on the type of ancillary data present, the location of the packet, the Data Count and the Checksum. Pressing MAG on the instrument allows the user to see a real-time view of the user data words within the ancillary data packet.


For further analysis in deferred time, the user can use CaptureVu™ to capture a complete frame of the video signal, which is then internally stored in the instrument. By selecting the Capture Buffer in the Ancillary Data Inspector, the complete frame of data is loaded into the display, and the user can then search through all the ancillary data packets that were captured within the buffer. Figure 66 shows the Capture Buffer data within the Ancillary Data Inspector display.

Figure 66. Capture Buffer display using Ancillary Data Inspector.


Each ancillary data packet has a specified location where the data can reside within the SDI signal. In some cases, like embedded audio, this can be located in most of the Horizontal Ancillary (HANC) data space; in other formats it can be a specific set of lines in the Vertical Ancillary (VANC) data space. Table 36 summarizes a few of the locations for ancillary data placement. Ancillary data should not reside in the vertical interval around the switching point of the video signal. SMPTE RP 168 specifies the switching point for each video format, as summarized in Table 37.

S272M SD Embedded Audio
  525:   HANC, CbYCrY; not present on lines 9, 10, 272, 273
  625:   HANC, CbYCrY; not present on lines 5, 6, 318, 319
  525p, 625p, 1080i, 1080p, 720p: not applicable

S299M HD Embedded Audio
  1080i: HANC, Cb/Cr only; not present on lines 7, 569
  1080p: HANC, Cb/Cr only; not present on line 7
  720p:  HANC, Cb/Cr only; not present on line 7
  525, 525p, 625, 625p: not applicable

RP165 EDH
  525:   Before SAV, lines 9, 272
  625:   Before SAV, lines 5, 318
  525p, 625p, 1080i, 1080p, 720p: not applicable

S352 Payload Identification
  525:   After EAV, lines 13, 273
  525p:  After EAV, line 13
  625:   After EAV, lines 9, 322
  625p:  After EAV, line 9
  1080i: After EAV, lines 10, 572
  1080p: After EAV, line 10
  720p:  After EAV, line 10

S334: The data packets are located in the active line portion of one or more lines in the vertical ancillary space. Data may be located in any line in the area from the second line after the line specified for switching to the last line before active video, inclusive.

RP215: The data packets are located in the active line portion of one line in the vertical ancillary space. Data may be located in any line in the area from the second line after the line specified for switching to the last line before active video, inclusive. On segmented-frame progressive formats, the film transfer descriptor data packet shall occur in the vertical blanking area at the start of the frame.

RP188: The preferred location for insertion of Ancillary Time Code (ATC) packets into a digital video signal stream is within the available ancillary space located within vertical blanking, after the vertical interval switching point defined in RP 168 and before the beginning of active video.

RP196: HANC time code packets shall be transmitted at least once per frame for LTC data. Within the television frame/field, the LTC and VITC data words shall be transmitted between lines 10-20 for 525-line systems and lines 6-22 for 625-line systems.

Table 36. ANC data placement.

Format        525      525p   625     625p   1080i   1080p   720p
Line Number   10, 273  10     6, 319  6      7, 569  7       7

Table 37. Vertical Interval Switching line numbers.
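The switching lines in Table 37 lend themselves to a small lookup that an ANC inserter might consult. The structure and helper name below are illustrative, and the choice to also exclude the line following each switching point reflects the rule quoted earlier that no ancillary data is placed on the line subsequent to the switching point:

```python
# Vertical interval switching lines from Table 37 (per SMPTE RP 168).
SWITCH_LINES = {
    "525":   (10, 273),
    "525p":  (10,),
    "625":   (6, 319),
    "625p":  (6,),
    "1080i": (7, 569),
    "1080p": (7,),
    "720p":  (7,),
}

def avoid_lines(fmt):
    """Lines an inserter should leave free of ancillary data: each
    switching line plus the line that follows it (e.g. line 8 in the
    1125/60 system)."""
    lines = set()
    for s in SWITCH_LINES[fmt]:
        lines.update({s, s + 1})
    return sorted(lines)
```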


Monitoring and measuring tools

We know that digital television is a stream of numbers, and this may lead to some unnecessary apprehension. Everything seems to be happening really fast, and we need some help to sort everything out. Fortunately, video, and especially the ancillary information supporting video, is quite repetitive, so all we need is the hardware to convert this high-speed numeric data to something we can study and understand. Why not just convert it to something familiar, like analog video?

Digital video, either standard definition or the newer high-definition studio formats, is very much the same as its analog ancestor. Lots of things have improved with time, but we still make video with cameras, from film, and today, from computers. The basic difference for digital video is the processing early in the chain that converts the analog video into numeric data and attaches ancillary data describing how to use the video data. For live cameras and telecine, analog values of light are focused on sensors, which generate an analog response that is converted somewhere along the line to numeric data. Sometimes we can get to this analog signal for monitoring with an analog waveform monitor, but more often the video will come out of the equipment as data. In the case of computer-generated video, the signal probably was data from the beginning. Data travels from source equipment to destination on a transport layer: the analog transport mechanism, often a wire or a fiber-optic path, carrying the data to some destination. We can monitor this data directly with a high-bandwidth oscilloscope, or we can extract and monitor the data information as video.

Operationally, we are interested in monitoring the video. For this we need a high-quality waveform monitor equipped with a standards-compliant data receiver to let us see the video in a familiar analog display. Tektronix provides several digital input waveform monitors, including the WVR7120/7020/6020 series 1RU rasterizers (Figure 67) for standard/high-definition component digital video and the WFM6120/7020/7120 series 3RU half-rack monitors (Figure 68), which are configurable for any of the digital formats in common use today.

Figure 67. WVR series standard and high-definition digital waveform monitors.

Figure 68. WFM6120/7020/7120 standard and high-definition digital waveform monitors.

Video Measurements


Technically, we may want to know that the camera or telecine is creating correct video data and that the ancillary data is accurate. We may also want to evaluate the analog characteristics of the transport layer. The Tektronix VM700T with digital option, the WFM6120 and the WFM7120 allow in-depth data analysis and a direct view of the eye-pattern shape of the standard definition transport layer. The newer WFM7120/6120 series high-definition monitors provide tools for both transport-layer and data-layer technical evaluation.

A test signal generator serves two purposes. It provides an ideal reference video signal for evaluation of the signal processing and transmission path, and it provides an example of the performance you should expect of today's high-quality system components. Some generation equipment, such as the Tektronix TG700 signal generator platform shown in Figure 69, provides options for both analog and digital, standard and high-definition signal formats.

These tools allow an operator to generate video that is completely compatible with the transmission system, video processing devices, and finally with the end viewer's display. Perhaps most important, these tools provide an insight into the workings of the video system itself that increases technical confidence and awareness to help you do your job better.

Monitoring digital and analog signals

There is a tendency to think of any video signal as a traditional time/amplitude waveform. This is a valid concept and holds for both analog and digital. For analog video, the oscilloscope or waveform monitor displays a plot of signal voltage as time progresses. The waveform monitor is synchronized to show the desired signal characteristic as it occurs at the same horizontal position on the waveform monitor display each time it occurs, horizontally in the line, or vertically in the field. A digital waveform monitor shows the video information extracted from the incoming data signal in the same manner as the analog waveform monitor.

You see the same information in the same way from the analog or digital signals. For analog you see the direct signal; for digital you see the signal described by the data. Operationally, you use the monitor to make the same video evaluations.

Additional measurements may be unique to the system being monitored. You may want to demodulate the NTSC or PAL color information for display on an analog vectorscope. You may want to see an X vs. Y display of the color-difference channels of a digital component signal to simulate an analog vector display without creating or demodulating a color subcarrier. You may want to observe the data content of a digital signal directly with a numeric or logic-level display. And you will want to observe gamut of the analog or digital signal. Gamut is covered in greater detail in Appendix A – Gamut, Legal, Valid.

Assessment of video signal degradation

Some of the signal degradations we were concerned with in analog NTSC or PAL are less important in standard definition component video. Degradations become important again for even more basic reasons as we move to high-definition video. If we consider the real analog effects, they are the same. We sought signal integrity in analog to avoid a degradation of color video quality, but in high-definition we can start to see the defect itself.

Video amplitude

The concept of unity gain through a system has been fundamental since the beginning of television. Standardization of video amplitude lets us design each system element for optimum signal-to-noise performance and freely interchange signals and signal paths. A video waveform monitor, a specialized form of oscilloscope, is used to measure video amplitude. When setting analog video amplitudes, it is not sufficient to simply adjust the output level of the final piece of equipment in the signal path. Every piece of equipment should be adjusted to appropriately transfer the signal from input to output.

In digital formats, maintenance of video amplitude is even more important. Adequate analog video amplitude into the system assures that an optimum number of quantization levels are used in the digitizing process to reproduce a satisfactory picture. Maintaining minimum and maximum amplitude excursions within limits assures the video voltage amplitude will not be outside the range of the digitizer. Aside from maintaining correct color balance, contrast, and brightness, video amplitude must be controlled within gamut limits legal for transmission and valid for conversion to other video formats. In a properly designed unity-gain video system, video amplitude adjustments will be made at the source and will be correct at the output.
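The relationship between analog amplitude and quantization range can be sketched numerically. The snippet below assumes the standard 8-bit ITU-R BT.601 luma scaling (black at code 16, white at code 235, codes 0 and 255 reserved); it is an illustration of why over-range analog input clips in the digitizer, not a model of any particular converter.

```python
# Illustrative mapping of normalized analog luma into 8-bit BT.601
# quantization. Assumption: standard scaling Y' = 16 + 219*Y, with
# codes 0 and 255 reserved for timing words on the interface.

def quantize_luma_8bit(y):
    """Map normalized luma (0.0 = black, 1.0 = white) to an 8-bit code."""
    code = round(16 + 219 * y)
    # Over-range analog input clips at the edges of the usable code range.
    return max(1, min(254, code))

print(quantize_luma_8bit(0.0))   # black -> 16
print(quantize_luma_8bit(1.0))   # white -> 235
print(quantize_luma_8bit(1.2))   # over-range input clips to 254
```

An input driven past nominal white does not gain extra codes; it simply clips, which is one form of the gamut violation discussed later.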

Figure 69. Tektronix TG700 Signal Generator Platform.

In the analog domain, video amplitudes are defined, and the waveform monitor configured, to a standard for the appropriate format. NTSC signals will be 140 IRE units, nominally one volt, from sync tip to white level. The NTSC video luminance range (Figure 70) is 100 IRE, nominally 714.3 mV, which may be reduced by 53.5 mV to include a 7.5 IRE black level setup. Depending on color information, luminance plus chrominance components may extend below and above this range. NTSC sync is –40 IRE units, nominally –285.7 mV from blanking level to sync tip. The NTSC video signal is generally clamped to blanking level and the video monitor is set to extinguish at black level.
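The IRE figures above all follow from one conversion factor: 140 IRE spans the full one-volt signal, so 1 IRE = 1000/140 mV ≈ 7.143 mV. A few lines reproduce the quoted values:

```python
# NTSC IRE-to-millivolt conversion: 140 IRE = 1000 mV.
MV_PER_IRE = 1000.0 / 140.0

def ire_to_mv(ire):
    return ire * MV_PER_IRE

print(round(ire_to_mv(100), 1))  # luminance range: 714.3 mV
print(round(ire_to_mv(7.5), 1))  # 7.5 IRE setup: 53.6 mV (quoted nominally as 53.5 mV)
print(round(ire_to_mv(-40), 1))  # sync: -285.7 mV
```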

PAL signals are also formatted to one volt, sync tip to white level, with a video luminance range of 700 mV and no setup. PAL sync is –300 mV. The signal is clamped, and the monitor brightness set to extinguish at black level. Chrominance information may extend above and below the video luminance range.

Video amplitude is checked on a stage-by-stage basis. An analog test signal with low-frequency components of known amplitude (such as blanking and white levels in the color bar test signal) will be connected to the input of each stage and the stage adjusted to replicate those levels at the output stage.

Regulatory agencies in each country, with international agreement, specify on-air transmission standards. NTSC, PAL, and SECAM video transmitters are amplitude-modulated with sync tip at peak power and video white level plus chroma extending towards minimum power. This modulation scheme is efficient and reduces visible noise, but is sensitive to linearity effects. Video levels must be carefully controlled to achieve a balance of economical full-power sync tip transmitter output and acceptable video signal distortion as whites and color components extend towards zero carrier power. If video levels are too low, the video signal-to-noise ratio suffers and electric power consumption goes up. If video levels are too high, the transmitter performs with greater distortion as the carrier nears zero power, and performance of the inter-carrier television audio receiver starts to fail.

Signal amplitude

In an analog system, the signal between studio components is a changing voltage directly representing the video. An analog video waveform monitor of the appropriate format makes it easy to view the voltage level of the analog video signal in relation to distinct timing patterns.

In a digital video system, the signal is a data "carrier" in the transport layer; a stream of data representing video information. This data is a series of analog voltage changes (Figures 71 and 72) that must be correctly identified as high or low at expected times to yield information on the content. The transport layer is an analog signal path that just carries whatever is input to its destination. The digital signal starts out at a level of 800 mV, and its spectral content at half the clock frequency at the destination determines the amount of equalization applied by the receiver.
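The "half the clock frequency" figure is easy to pin down for the common SDI rates. For NRZI-coded SDI the clock frequency equals the bit rate, so the highest fundamental in the data (an alternating 1010... pattern) sits at half that rate — the frequency at which the receiver gauges remaining energy:

```python
# Half-clock frequency for the common SDI bit rates. Assumption:
# NRZI coding, where clock frequency equals the bit rate.

def half_clock_mhz(bit_rate_mbps):
    return bit_rate_mbps / 2.0

print(half_clock_mhz(270))    # SD-SDI: 135.0 MHz
print(half_clock_mhz(1485))   # HD-SDI: 742.5 MHz
```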

Figure 71. Correct 270 Mb/s data signal viewed with WFM7120.

Figure 70. Correctly adjusted composite video amplitude, NTSC, no setup.

Digital signals in the transport layer can be viewed with a high-frequency oscilloscope or with a video waveform monitor such as the Tektronix WFM7120/WFM6120 or WVR7120 with EYE option for either standard or high-definition formats. In the eye pattern mode, the waveform monitor operates as an analog sampling oscilloscope with the display swept at a video rate. The equivalent bandwidth is high enough, the return loss great enough, and the measurement cursors appropriately calibrated to accurately measure the incoming data signal. The rapidly changing data in the transport layer is a series of ones and zeros overlaid to create an eye pattern. Eye pattern testing is most effective when the monitor is connected to the device under test with a short cable run, enabling use of the monitor in its non-equalized mode. With long cable runs, the data tends to disappear in the noise and the equalized mode must be used. While the equalized mode is useful in confirming headroom, it does not provide an accurate indicator of the signal at the output of the device under test. The PHY option also provides additional transport layer information, such as a jitter display and automated measurements of eye amplitude, and provides a direct measurement readout of these parameters.

Since the data transport stream contains components that change between high and low at rates of 270 Mb/s for standard definition ITU-R BT.601 component video, up to 1.485 Gb/s for some high-definition formats (SMPTE 292M), the ones and zeros will be overlaid (Figure 72) for display on a video waveform monitor. This is an advantage since we can now see the cumulative data over many words, to determine any errors or distortions that might intrude on the eye opening and make recovery of the data high or low by the receiver difficult. Digital waveform monitors such as the Tektronix WFM7120/6120 series for multiple digital formats provide a choice of synchronized sweeps for the eye pattern display so word, line, and field disturbances may be correlated.

The digital video waveform display that looks like a traditional analog waveform (baseband video) is really an analog waveform recreated from the numeric data in the transport layer. The digital data is decoded into high-quality analog component video that may be displayed and measured as an analog signal. Although monitoring in the digital path is the right choice, many of the errors noted in digital video will have been generated earlier in the analog domain.

Figure 72. Development of the eye diagram.

Frequency response

In an analog video system, video frequency response will be equalized where necessary to compensate for loss of high-frequency video information in long cable runs. The goal is to make each stage of the system "flat" so all video frequencies travel through the system with no gain or loss. A multiburst test signal (Figure 73) can be used to quickly identify any required adjustment. If frequency packets in the multiburst signal are not the same amplitude at the output stage (Figure 74), an equalizing video distribution amplifier may be used to compensate, restoring the multiburst test signal to its original value.

In a digital system, high-frequency loss affects only the energy in the transport data stream (the transport layer), not the data numbers (the data layer), so there is no effect on video detail or color until the high-frequency loss is so great the data numbers cannot be recovered. The equalizer in the receiver will compensate automatically for high-frequency losses in the input. The system designer will take care to keep cable runs short enough to achieve near 100% data integrity, and there is no need for frequency response adjustment. Any degradation in video frequency response will be due to analog effects.

Group delay

Traditional analog video designs, for standard definition systems, have allowed on the order of 10 MHz bandwidth and have provided very flat frequency response through the 0-6 MHz range containing the most video energy. Group-delay error, sometimes referred to as envelope delay or frequency-dependent phase error, results when energy at one frequency takes a longer or shorter time to transit a system than energy at other frequencies, an effect often associated with bandwidth limitations. The effect seen in the picture would be an overshoot or rounding of a fast transition between lower and higher brightness levels. In a composite NTSC or PAL television system, the color in the picture might be offset to the left or right of the associated luminance. The largest contributors to group-delay error are the NTSC/PAL encoder, the sound-notch filter, and the vestigial-sideband filter in the high-power television station transmitter, and of course the complementary chroma bandpass filters in the television receiver's

Figure 73. Multiburst test signal with equal amplitude at each frequency, 1H display.

Figure 74. Multiburst with frequency response rolloff, 2H display.

NTSC or PAL decoder. From an operational standpoint, most of the effort to achieve a controlled group delay response centers in the analog transmitter plant. It is routine, however, to check group delay, or phase error, through the analog studio plant to identify gross errors that may indicate a failure in some individual device. Group delay error in a studio plant is easily checked with a pulse and bar test signal (Figure 75). This test signal includes a half-sinusoidal 2T pulse and a low-frequency white bar with fast, controlled rise and fall times. A 2T pulse, with energy at half the system bandwidth, causes a low level of ringing which should be symmetrical around the base of the pulse. If the high-frequency energy in the edge gets through faster or slower than the low-frequency energy, the edge will be distorted (Figure 76). If high-frequency energy is being delayed, the ringing will occur later, on the right side of the 2T pulse.
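The "T" in a 2T pulse is the Nyquist interval for the system bandwidth, T = 1/(2·f_c). As a worked example (assuming a nominal 4 MHz NTSC system bandwidth, a common convention for this test signal), the 2T sine-squared pulse then has a half-amplitude duration of 250 ns:

```python
# Nyquist interval T = 1 / (2 * f_c), in nanoseconds for f_c in MHz.
# Assumption for the example: a nominal 4 MHz system bandwidth.

def nyquist_interval_ns(bandwidth_mhz):
    return 1000.0 / (2.0 * bandwidth_mhz)

T = nyquist_interval_ns(4.0)
print(T)        # T = 125.0 ns
print(2 * T)    # 2T pulse half-amplitude duration = 250.0 ns
```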

The composite pulse and bar test signal has a feature useful in the measurement of system phase response. In composite system testing, a 12.5T or 20T pulse modulated with energy at subcarrier frequency is used to quickly check both chroma-luma delay and relative gain at subcarrier frequency vs. a low frequency. A flat baseline indicates that both gain and delay are correct. Any bowing upward of the baseline through the system indicates a lower gain at the subcarrier frequency. Bowing downward indicates higher gain at the subcarrier frequency. Bowing upward at the beginning and downward at the end indicates high-frequency energy has arrived later, and vice versa. In a component video system, with no color subcarrier, the 2T pulse and the edge of the bar signal are of most interest.

A more comprehensive group delay measurement may be made using a multi-pulse or sin x/x pulse and is indicated when data, such as teletext or Sound-in-Sync, is to be transmitted within the video signal.

Digital video system components use anti-alias and reconstruction filters in the encoding/decoding process to and from the analog domain. The cutoff frequencies of these internal filters are about 5.75 MHz and 2.75 MHz for standard definition component video channels, so they do react to video energy, but this energy is less than is present in the 1 MHz and 1.25 MHz filters in the NTSC or PAL encoder. Corresponding cutoff frequencies for filters in digital high-definition formats are about 30 MHz for luma and 15 MHz for chroma information. The anti-alias and reconstruction filters in digital equipment are well corrected and are not adjustable operationally.

Non-linear effects

An analog circuit may be affected in a number of ways as the video operating voltage changes. Gain of the amplifier may be different at different operating levels (differential gain), causing incorrect color saturation in the NTSC or PAL video format. In a component analog format, brightness and color values may shift.

Figure 75. Correct 2T pulse, 1H MAG display.

Figure 76. 2T pulse and bar, degraded.

Differential gain

Differential gain is an analog effect, and will not be caused or corrected in the digital domain. It is possible, however, that digital video will be clipped if the signal drives the analog-to-digital converter into the range of reserved values. This gamut violation will cause incorrect brightness of some components and color shift. Please refer to Appendix A – Gamut, Legal, Valid.

Differential phase

Time delay through the circuit may change with the different video voltage values. This is an analog effect, not caused in the digital domain. In NTSC this will change the instantaneous phase (differential phase) of the color subcarrier, resulting in a color hue shift with a change in brightness. In the PAL system, this hue shift is averaged out, shifting the hue first one way then the other from line to line. The effect in a component video signal, analog or digital, may produce a color fringing effect depending on how many of the three channels are affected. The equivalent effect in high definition may be a ring or overshoot on fast changes in brightness level.

Digital System Testing

Stress testing

Unlike analog systems that tend to degrade gracefully, digital systems tend to work without fault until they crash. To date, there are no in-service tests that will measure the headroom of the SDI signal. Out-of-service stress tests are required to evaluate system operation. Stress testing consists of changing one or more parameters of the digital signal until failure occurs. The amount of change required to produce a failure is a measure of the headroom. Starting with the specifications in the relevant serial digital video standard (SMPTE 259M or SMPTE 292M), the most intuitive way to stress the system is to add cable until the onset of errors. Other tests would be to change amplitude or risetime, or add noise and/or jitter to the signal. Each of these tests is evaluating one or more aspects of the receiver performance, specifically automatic equalizer range and accuracy and receiver noise characteristics. Experimental results indicate that cable-length testing, in particular when used in conjunction with the SDI check field signals

described in the following sections, is the most meaningful stress test because it represents real operation. Stress testing the receiver's ability to handle amplitude changes and added jitter is useful in evaluating and accepting equipment, but not too meaningful in system operation. Addition of noise or change in risetime (within reasonable bounds) has little effect on digital systems and is not important in stress tests.

Cable-length stress testing

Cable-length stress testing can be done using actual coax or a cable simulator. Coax is the simplest and most practical method. The key parameter to be measured is onset of errors, because that defines the crash point. With an error measurement method in place, the quality of the measurement will be determined by the sharpness of the knee of the error curve. An operational check of the in-plant cabling can be easily done using the waveform monitor. This in-service check displays key information on the signal as it leaves the previous source and how it survives the transmission path. Figure 77 shows the effect of an additional length of cable on the signal.

Figure 77. WFM7120 display of eye closure with long length of cable.

SDI check field

The SDI Check Field (also known as a "pathological signal") is a full-field test signal and therefore must be used out-of-service. It's a difficult signal for the serial digital system to handle and is a very important test to perform. The SDI Check Field is specified to have a maximum amount of low-frequency energy, after scrambling, in two separate parts of the field. Statistically, this low-frequency energy will occur about once per frame. One component of the SDI Check Field tests equalizer operation by generating a sequence of 19 zeros followed by a 1 (or 19 ones followed by 1 zero). This occurs about once per field as the scrambler attains the required starting condition, and when present it will persist for the full line and terminate with the EAV packet. This sequence produces a high DC component that stresses the analog capabilities of the equipment and transmission system handling the signal. This part of the test signal may appear at the top of the picture display as a shade of purple, with the value of luma set to 198h and both chroma channels set to 300h. The other part of the SDI Check Field signal is designed to check phase-locked loop performance with an occasional signal consisting of 20 zeros followed by 20 ones. This provides a minimum number of zero crossings for clock extraction. This part of the test signal may appear at the bottom of the picture display as a shade of gray, with luma set to 110h and both chroma channels set to 200h. Some test signal generators will use a different signal order, with the picture display in shades of

green. The results will be the same. Either of the signal components (and other statistically difficult colors) might be present in computer-generated graphics, so it is important that the system handle the SDI Check Field test signal without errors. The SDI Check Field is a fully legal signal for component digital, but not for the composite domain. The SDI Check Field (Figure 78) is defined in SMPTE Recommended Practice RP178 for SD and RP198 for HD.
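The scrambling that the check field exploits can be sketched in a few lines. SDI uses a self-synchronizing scrambler with generator polynomial x⁹ + x⁴ + 1 (per SMPTE 259M/292M); the check-field words drive its output into the long runs of identical bits described above. The bit ordering and register convention below are illustrative, not a bit-exact model of the interface:

```python
# Sketch of the self-synchronizing SDI scrambler, generator
# polynomial x^9 + x^4 + 1. The matching descrambler recovers the
# input because both sides clock the *scrambled* bits through the
# same 9-bit register, so they stay in step automatically.

def scramble(bits, state=0):
    out = []
    for b in bits:
        s = b ^ ((state >> 8) & 1) ^ ((state >> 3) & 1)  # taps at x^9 and x^4
        state = ((state << 1) | s) & 0x1FF               # 9-bit shift register
        out.append(s)
    return out

def descramble(bits, state=0):
    out = []
    for s in bits:
        b = s ^ ((state >> 8) & 1) ^ ((state >> 3) & 1)
        state = ((state << 1) | s) & 0x1FF               # fed by received bits
        out.append(b)
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
assert descramble(scramble(data)) == data   # self-synchronizing round trip
```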

In-service testing

CRC (Cyclic Redundancy Check) codes can be used to provide information to the operator, or even sound an external alarm, in the event data does not arrive intact. A CRC is present in each video line in high-definition formats, and may be optionally inserted into each field in standard definition formats. A CRC is calculated and inserted into the data signal for comparison with a newly calculated CRC at the receiving end. For standard definition formats, the CRC value is inserted into the vertical interval, after the switch point. SMPTE RP165 defines the optional method for the detection and handling of data errors in standard definition video formats (EDH, Error Detection and Handling). Full Field and Active Picture data are separately checked and a 16-bit CRC word generated once per field. The Full Field check covers all data transmitted except in lines reserved for vertical interval switching (lines 9-11 in 525, or lines 5-7 in 625 line standards). The Active Picture (AP) check covers only the active video data words,

Figure 78. SDI Check Field “Pathological Test Signal.”

between but not including SAV and EAV. Half-lines of active video are not included in the AP check. Digital monitors may provide both a display of EDH CRC values and an alarm on AP or FF (Full Field) CRC errors (Figure 79). In high-definition formats, CRCs for luma and chroma follow the EAV and line count ancillary data words. The CRC for high-definition formats is defined in SMPTE 292M to follow the EAV and line number words, so CRC checking is on a line-by-line basis for Y-CRC and C-CRC. The user can then monitor the number of errors received along the transmission path. Ideally, the instrument will show zero errors, indicating an error-free transmission path. If the number of errors starts to increase, the user should pay attention to the trend. As the errors increase to one every hour or minute, this is an indication that the system is getting closer to the digital cliff. The

transmission path should be investigated further to isolate the cause of the error before the system reaches the digital cliff, at which point it becomes difficult to isolate the error within the path. Figure 80 shows the video session display of the WFM7120; the accumulated CRC errors are shown for both the Y and C channels of the high-definition signal. Within the display, not only are the errors counted, but they are also displayed in relation to the number of fields and seconds of the time interval being monitored. Resetting the time will restart the calculation and monitoring of the signal. If significant CRC errors start to be seen, the transmission path should be investigated further by using the eye and jitter displays. If errors occur every minute or every second, the system is approaching the digital cliff and significant CRC errors would be seen in the display.
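The compare-at-the-receiver idea behind these line CRCs can be sketched as follows. The code assumes the CRC-18 generator x¹⁸ + x⁵ + x⁴ + 1 named in SMPTE 292M, a zero initial value, and a simplified bit order; the exact word packing in the standard differs, so treat this only as an illustration of how a recomputed CRC exposes a corrupted line:

```python
# Illustrative line-CRC check in the spirit of the HD-SDI Y/C CRCs.
# Assumptions (see lead-in): CRC-18 generator x^18 + x^5 + x^4 + 1,
# zero initial value, simplified LSB-first bit order.

POLY = (1 << 5) | (1 << 4) | 1    # x^5 + x^4 + 1; x^18 handled by the shift

def crc18(words, bits_per_word=10):
    crc = 0
    for w in words:
        for i in range(bits_per_word):
            bit = (w >> i) & 1
            top = (crc >> 17) & 1
            crc = ((crc << 1) | bit) & 0x3FFFF   # shift in the next data bit
            if top:
                crc ^= POLY                      # reduce modulo the generator
    return crc

line = [0x200] * 16               # a run of mid-scale words standing in for a line
tx = crc18(line)                  # "transmitted" CRC computed at the source
assert crc18(line) == tx                      # intact line: CRCs agree
assert crc18([0x201] + line[1:]) != tx        # a single-bit error is caught
```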

Figure 79. Video Session screen of SD signal showing EDH values on WVR7120.

Figure 80. Video Session screen of HD signal showing CRC values on WFM7120.

Eye-pattern testing

The eye pattern (Figure 81) is an oscilloscope view of the analog signal transporting the data. The signal highs and lows must be reliably detectable by the receiver to yield real-time data without errors. The basic parameters measured with the eye-pattern display are signal amplitude, risetime, and overshoot. Jitter can also be measured with the eye pattern if the clock is carefully specified. The eye pattern is viewed as it arrives, before any equalization. Because of this, most eye-pattern measurements will be made near the source, where the signal is not dominated by noise and frequency rolloff. Important specifications include amplitude, risetime, and jitter, which are defined in the standards SMPTE 259M, SMPTE 292M, and RP184. Frequency, or period, is determined by the television sync generator developing the source signal, not the serialization process. A unit interval (UI) is defined as the time between two adjacent signal transitions, which is the reciprocal of the clock frequency. The unit interval is 3.7 ns for digital component 525 and 625 (SMPTE 259M) and 673.4 ps for digital high definition (SMPTE 292M). A serial receiver determines if the signal is a "high" or a "low" in the center of each eye, thereby detecting the serial data. As noise and jitter in the signal increase through the transmission channel, certainly the best decision point is in the center of the eye (as shown in Figure 82). Some receivers select a point at a fixed time after each transition point. Any effect which closes the eye may reduce the usefulness of the received signal. In a communications system with forward error correction, accurate data recovery can be made with the eye nearly closed. With the very low error rates

required for correct transmission of serial digital video, a rather large and clean eye opening is required after receiver equalization. This is because the random processes that close the eye have statistical "tails" that would cause an occasional, but unacceptable, error. Allowed jitter is specified as 0.2 UI. This is 740 ps for digital component 525 and 625 and 134.7 ps for digital high definition. Digital systems will work beyond this jitter specification, but will fail at some point. The basics of a digital system are to maintain a good-quality signal to keep the system healthy and prevent a failure which would cause the system to fall off the edge of the cliff. Signal amplitude is important because of its relationship to noise, and because the receiver estimates the required high-frequency compensation (equalization) based on the half-clock-frequency energy remaining as the signal arrives. Incorrect amplitude at the sending end could result in incorrect equalization being applied at the receiving end, causing signal distortions. Rise-time measurements are made from the 20% to 80% points as appropriate for ECL logic devices. Incorrect rise time could cause signal distortions such as ringing and overshoot, or, if too slow, could reduce the time available for sampling within the eye. Overshoot will likely be caused by impedance discontinuities or poor return loss at the receiving or sending terminations. Effective testing for correct receiving-end termination requires a high-performance loop-through on the test instrument to see any defects caused by the termination under evaluation. Cable loss tends to reduce the visibility of reflections, especially at

Figure 81. Eye-pattern display of data signal in the analog transport layer.

Figure 82. Data recovery of serial signal.

high-definition data rates of 1.485 Gb/s and above. High-definition digital inputs are usually terminated internally, and in-service eye-pattern monitoring will not test the transmission path (cable) feeding other devices. Out-of-service transmission path testing is done by substituting a test signal generator for the source, and a waveform monitor with eye-pattern display in place of the normal receiving device. Eye-pattern testing requires an oscilloscope with a known response well beyond the transport layer data rate and is generally measured with sampling techniques. The Tektronix VM700T, WVR7120, and WFM7120/WFM6120 provide eye-pattern measurement capability for standard definition (270 Mb/s data), and the WVR7120 or WFM7120 allows eye-pattern measurements on high-definition 1.485 Gb/s data streams. These digital waveform monitors provide several advantages because they are able to extract and display the video data as well as measure it. The sampled eye pattern can be displayed in a three-data-bit overlay (3 Eye mode), to show jitter uncorrelated to the 10-bit/20-bit data word, or the display can be set to show ten bits for SD signals or twenty bits for high-definition signals of word-correlated data. By synchronizing the waveform monitor sweep to video line and field rates, it is easy to see any DC shift in the data stream correlated to horizontal or vertical video information.
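The unit-interval and jitter figures quoted earlier (3.7 ns and 673.4 ps UI; 740 ps and 134.7 ps allowed jitter) all fall out of the definition 1 UI = 1/bit rate and the 0.2 UI allowance:

```python
# Unit interval (UI) and the 0.2 UI jitter allowance for the two
# common serial rates, reproducing the figures quoted in the text.

def unit_interval_ps(bit_rate_gbps):
    return 1000.0 / bit_rate_gbps        # 1 UI = 1 / bit rate, in picoseconds

for name, rate in (("SD-SDI 270 Mb/s", 0.270), ("HD-SDI 1.485 Gb/s", 1.485)):
    ui = unit_interval_ps(rate)
    print(f"{name}: UI = {ui:.1f} ps, 0.2 UI = {0.2 * ui:.1f} ps")
```

For SD this gives 3703.7 ps (about 3.7 ns) and 740.7 ps (quoted nominally as 740 ps); for HD, 673.4 ps and 134.7 ps.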

Understanding certain characteristics of the eye display can help in troubleshooting problems within the path of the signal. Proper termination within an HD-SDI system is even more critical because of the high clock rate of the signal. Improper termination will mean

that not all of the energy will be absorbed by the receiving termination or device. This residual energy will be reflected back along the cable, creating a standing wave. These reflections will produce ringing within the signal, and the user will observe overshoots and undershoots on the eye display, as shown in Figure 83. Note that this termination error by itself would not cause a problem in the signal being received. However, this error, added cumulatively to other errors along the signal path, will narrow the eye opening more quickly and decrease the receiver's ability to recover the clock and data from the signal.

The eye display typically has the cross point of the transition in the middle of the eye display at the 50% point. If the rise time or fall time of the signal transitions are unequal, the cross point will move away from the 50% point depending on the degree of inequality between the transitions. AC-coupling within a device will shift the high signal level closer to the fixed-decision threshold, reducing noise margin. Typically, SDI signals have symmetric rise and fall times, but asymmetric line drivers and optical signal sources (lasers) can introduce non-symmetric transitions as shown in Figure 84. While significant, these source asymmetries do not have especially large impacts on signal rise and fall times. In particular, cable attenuation will generally have a much larger impact on signal transition times. Without appropriate compensation or other adjustments, asymmetries in SDI signals can reduce noise margins with respect to the decision threshold used in decoding and can lead to decoding errors.

Figure 83. Unterminated Eye display on the WFM7120.

Figure 84. Non-symmetrical eye display on WFM7120.


Adding lengths of cable between the source and the measurement instrument results in attenuation of the amplitude, and frequency losses along the cable produce longer rise and fall times of the signal. As cable length increases, the eye opening closes and is no longer clearly visible within the display. However, the signal can still be decoded correctly because the equalizer is able to recover the data stream. When the SDI signal has been degraded by a long length of cable as in Figure 77, the eye opening is no longer clearly visible. In this case the equalized eye mode on the WFM7120/WFM6120 allows the user to observe the eye opening after the equalizer has corrected the signal, as shown in Figure 85. Therefore, it is likely that a receiver with a suitable adaptive equalizer will be able to recover this signal. However, it should be remembered that not all receivers use the same designs, and there is a possibility that some device may still not be able to recover the signal. If the equalizer within the instrument is able to recover data, the equalized eye display should be open. If this display is partially or fully closed, the receiver will have to work harder to recover the clock and data, and there is more potential for data errors to occur in the receiver. Data errors can produce sparkle effects in the picture, line dropouts or even frozen images. At this point, the receiver at the end of the signal path is having problems extracting the clock and data from the SDI signal. By maintaining the health of the physical layer of the signal, we can ensure that these types of problems do not occur. The Eye and Jitter displays of the instrument can help troubleshoot these problems.

Jitter testing

Since there is no separate clock provided with the video data, a sampling clock must be recovered by detecting data transitions. This is accomplished by directly recovering energy around the expected clock frequency to drive a high-bandwidth oscillator (i.e., a 5 MHz bandwidth 270 MHz oscillator for SD signals) locked in near-real-time with the incoming signal. This oscillator then drives a heavily averaged, low-bandwidth oscillator (i.e., a 10 Hz bandwidth 270 MHz oscillator for SD signals). In a jitter measurement instrument, samples of the high- and low-bandwidth oscillators are then compared in a phase demodulator to produce an output waveform representing jitter. This is referred to as the "demodulator method." Timing jitter is defined as the variation in time of the significant instants (such as zero crossings) of a digital signal relative to a jitter-free clock above some low frequency (typically 10 Hz). It would be preferable to use the original reference clock, but it is not usually available, so the heavily averaged oscillator in the measurement instrument is often used. Alignment jitter, or relative jitter, is defined as the variation in time of the significant instants (such as zero crossings) of a digital signal relative to a hypothetical clock recovered from the signal itself. This recovered clock will track jitter in the signal up to its upper clock recovery bandwidth, typically 1 kHz for SDI and 100 kHz for HD signals. Measured alignment jitter includes those terms above this frequency. Alignment jitter shows signal-to-latch clock timing margin degradation.
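The distinction between timing and alignment jitter can be illustrated numerically. The sketch below uses a made-up list of transition times (not real SDI data), and idealizes the clock-recovery PLL as perfectly tracking the slow wander; it measures peak-to-peak deviation of edge times against an ideal clock, with and without the low-frequency component removed:

```python
import math

def jitter_ui(edge_times, bit_period, tracked=None):
    """Peak-to-peak jitter of a set of transitions, in unit intervals (UI).

    Deviations are measured against the ideal grid n * bit_period.
    Passing 'tracked' (the component a clock-recovery PLL would follow)
    subtracts it first, which models alignment (relative) jitter.
    """
    devs = []
    for n, t in enumerate(edge_times):
        dev = t - n * bit_period
        if tracked is not None:
            dev -= tracked[n]          # remove what the recovered clock tracks
        devs.append(dev / bit_period)  # express in UI
    return max(devs) - min(devs)

# Toy signal: one transition per unit interval, carrying slow wander
# (one cycle over the whole record) plus a small fast component.
n_bits = 1000
wander = [0.3 * math.sin(2 * math.pi * n / n_bits) for n in range(n_bits)]
fast = [0.05 * math.sin(2 * math.pi * n / 7) for n in range(n_bits)]
edges = [n + wander[n] + fast[n] for n in range(n_bits)]

timing = jitter_ui(edges, 1.0)             # includes the wander (~0.7 UI p-p)
alignment = jitter_ui(edges, 1.0, wander)  # PLL tracks the wander (~0.1 UI p-p)
```

The timing-jitter figure includes the slow wander; the alignment-jitter figure, like the high-pass-filtered instrument readout, sees only the fast component.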

Figure 85. Equalized Eye mode on the WFM7120.


Tektronix instruments such as the WFM6120 (Figure 86), WFM7120 and VM700T provide a selection of high-pass filters to isolate jitter energy. Jitter information may be unfiltered (the full 10 Hz to 5 MHz bandwidth) to display Timing Jitter, or filtered by a 1 kHz (–3 dB) high-pass filter to display 1 kHz to 5 MHz Alignment Jitter. Additional high-pass filters may be selected to further isolate jitter components. These measurement instruments provide a direct readout of jitter amplitude and a visual display of the demodulated jitter waveform to aid in isolating the cause of the jitter. It is quite common for a data receiver in a signal path to tolerate jitter considerably in excess of that specified by SMPTE recommendations, but the build-up of jitter (jitter growth) through multiple devices could lead to unexpected failure. Jitter in bit-serial systems is discussed in SMPTE RP184, EG33, and RP192.

Jitter within the SDI signal will change the time when a transition occurs and cause a widening of the overall transition point. This jitter can cause a narrowing or closing of the eye display and make the determination of the decision threshold more difficult. It is only possible to measure up to one unit interval of jitter within the eye display, either manually with cursors or by the automated measurement readout. It can also be difficult within the eye display to detect infrequently occurring jitter events, because the intensity of these events makes them harder to observe compared to the regular, repeatable transitions within the SDI signal.

Within the WFM7120 and WFM6120 EYE option, a jitter readout is provided within the eye display. The readout provides a measurement in both unit intervals and time. For an operational environment, a jitter thermometer bar display provides a simple warning of an SDI signal exceeding a jitter threshold, as shown in Figure 87. When the bar turns red it can alert the user to a potential problem in the system. This threshold value is user selectable and can be set at a defined limit by the user.
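The conversion between the two readout units is a simple division by the bit rate. A quick sketch of the arithmetic (the 0.2 UI figure is just an illustrative value):

```python
def ui_to_seconds(jitter_ui, bit_rate):
    """Convert a jitter readout from unit intervals to seconds."""
    return jitter_ui / bit_rate

# 0.2 UI is about 741 ps on a 270 Mb/s SD stream, but only about
# 135 ps at the 1.485 Gb/s HD rate -- the same UI figure is a much
# tighter absolute timing tolerance for HD.
print(round(ui_to_seconds(0.2, 270e6) * 1e12))    # 741
print(round(ui_to_seconds(0.2, 1.485e9) * 1e12))  # 135
```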

Figure 87. Jitter Bar Measurement.

Figure 86. Jitter display mode in two-field sweep on WFM7120.


Figure 88. Jitter display with different filter selections: 10 Hz, 100 Hz, 1 kHz, and 100 kHz.


To characterize different types of jitter, the jitter waveform display available with the PHY option on the WFM6120 and WFM7120 provides a better way to investigate jitter problems within the signal than the eye display and jitter readout. The jitter waveform can be displayed in a one-line, two-line, one-field or two-field display related to the video rate. When investigating jitter within the system it is useful to select the two-field display and increase the gain within the display. A small amount of jitter is present within all systems, but the trace should be a horizontal line. Increasing the gain to ten times will show the inherent noise within the system. This should be random in nature; if not, there is likely to be a deterministic component of jitter present within the signal.

Within the instrument, one can apply 10 Hz, 100 Hz, 1 kHz, 10 kHz and 100 kHz filters for the measurement. These can aid in the isolation of jitter frequency problems. In the example shown in Figure 88, different filters were used, and the direct jitter readout and jitter waveform display are shown. With the filter set to 10 Hz, the measurement of jitter was 0.2 UI and there are disturbances to the trace at field rates. There are also some occasional vertical shifts in the trace when viewed on the waveform display. This gives rise to larger peak-to-peak measurements than actually measured from the display itself. When a 100 Hz filter is applied, some of the components of jitter are reduced and the vertical jumping of the trace is not present, giving a more stable display. The measurement now reads 0.12 UI; the disturbances at field rate are still present, however. Application of the 1 kHz filter reduces the components of jitter and the trace is more of a flat line, although the disturbances at field rate can still be observed. The jitter readout did not drop significantly between the 100 Hz and 1 kHz filter selections. With the 100 kHz filter applied, the display now shows a flat trace and the jitter readout is significantly lower at 0.07 UI. In this case, the output of the device is within normal operating parameters for this unit and provides a suitable signal for decoding of the physical layer. Normally, as the band-pass gets narrower and the filter selection is increased, you will expect the jitter measurement to become smaller, as in this case. Suppose that as the filter value is increased and the band-pass bandwidth narrowed, the jitter readout actually increased. What would this mean was occurring in the SDI signal?

In this case, one explanation of these measurement results is that a pulse of jitter was present within the signal, and this pulse of jitter fell at the band-pass edge of one of the filter selections. Instead of this component being removed by the filter selection, it was actually differentiated, producing a ringing at the rising and falling transitions of the pulse and a larger value of jitter within that bandwidth filter selection.
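This edge effect is easy to reproduce numerically. As a simplified sketch (a first-order digital high-pass standing in for the instrument's filter, and a rectangular jitter pulse as the stimulus), the filter turns each edge of the pulse into a spike, so the peak-to-peak readout grows rather than shrinks:

```python
def high_pass(x, alpha):
    """First-order high-pass: y[n] = alpha * (y[n-1] + x[n] - x[n-1])."""
    y = [0.0]
    for n in range(1, len(x)):
        y.append(alpha * (y[-1] + x[n] - x[n - 1]))
    return y

# A rectangular 'pulse' of jitter, 0.1 UI high.
pulse = [0.0] * 100 + [0.1] * 100 + [0.0] * 100

p2p_in = max(pulse) - min(pulse)   # 0.1 UI into the filter
out = high_pass(pulse, alpha=0.95)
p2p_out = max(out) - min(out)      # ~0.19 UI out: both edges become spikes
```

The filtered waveform swings both positive (at the rising edge) and negative (at the falling edge), nearly doubling the peak-to-peak value even though the filter "removed" the low-frequency content of the pulse.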

By use of these filter selections, the user can determine within which frequency band the jitter components are present. Most of the frequency components present will be multiples of the line or field rate, which can be helpful in understanding which devices produce significant amounts of jitter within the SDI transmission path. Typically, the Phase Lock Loop (PLL) design of the receiver will pass low-frequency jitter through from input to output of the device, as the unit tracks the jitter present at the input to the device. High-frequency jitter components are more difficult for the PLL to track and can cause locking problems in the receiver.


SDI status display

The SDI Status display provides a summary of several SDI physical layer measurements, as shown in Figure 89. Within the WFM7120/6120 and WVR7120 with the Eye option, it is possible to configure the two jitter readouts to show Timing and Alignment jitter simultaneously, by configuring tiles one and two for Timing jitter and tiles three and four for Alignment jitter. The instruments will automatically change the filter setting for alignment jitter between HD (100 kHz) and SD (1 kHz) depending on the type of SDI signal applied. Additionally, a cable-length estimation measurement bar is also shown within the SDI Status display. If the unit has the PHY option, automatic measurements of eye amplitude, eye risetime, eye falltime and eye rise-fall are made by the instrument. These automatic measurements provide a more accurate and reliable method of measuring the physical layer.

Cable-length measurements

The cable-length measurement is useful to quantify equipment operational margin within a transmission path. Most manufacturers specify their equipment to work within a specified range using a certain type of cable, for instance: [Receiver Equalization Range - Typically SD: to 250m of type 8281 cable; HD: to 100m of type 8281 cable]. As shown in this example, the cable type specified is 8281 cable. However, throughout your facility a different type of cable may be used. In this case, set the waveform monitor to the cable type specified by the equipment manufacturer and then measure the cable length. If the reading from the instrument is 80 meters, we know that this piece of equipment will work to at least 100 meters and we have 20 meters of margin within this signal path. If the measurement was above 100 meters, then we would have exceeded the manufacturer's recommendation for the device. Manufacturers specify their equipment to one of the most popular cable types, and it is not necessary to have that specific type of cable in your facility when making this measurement. The WFM7120 and WFM6120 support the following cable types, which are typically used within specifications: Belden 8281, 1505, 1695A, 1855A, Image 1000 and Canare L5-CFB. Simply select the appropriate cable type from the configuration menu for the physical layer measurement. Once the cable type has been selected, apply the SDI signal to the instrument and it will provide measurements of Cable Loss, Cable Length and Estimated Source Signal Level.

– Cable Loss shows the signal loss in dB (decibels) along the cable length. A value of 0 dB indicates a good 800 mV signal, whereas a value of -3 dB would indicate a source with 0.707 of the expected amplitude. If we assume that the launch amplitude of the signal was 800 mV, then the amplitude of the signal at the measurement location would be approximately 565 mV.

– Cable Length indicates the length of the cable between the source signal and the waveform monitor. The instrument calculates the cable length based on the signal power at the input and the type of cable selected by the user.

– Source Level shows the calculated launch amplitude of the signal source, assuming a continuous run of cable, based on the specified type of cable selected by the user.
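The dB-to-amplitude arithmetic behind the Cable Loss and Source Level readouts can be sketched as follows (standard voltage-ratio decibel formulas, not the instrument's internal algorithm):

```python
def amplitude_from_loss(launch_mv, loss_db):
    """Signal amplitude remaining after loss_db of cable attenuation."""
    return launch_mv * 10 ** (-loss_db / 20)

def estimated_source_level(measured_mv, loss_db):
    """Back-calculate the launch amplitude from the measured level."""
    return measured_mv * 10 ** (loss_db / 20)

# -3 dB leaves about 0.707 of an 800 mV launch amplitude,
# roughly the 565 mV figure quoted in the text.
print(round(amplitude_from_loss(800, 3)))  # 566
```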

Figure 89. SDI Status display from WFM7120.


These types of measurements can be particularly useful when qualifying a system and verifying equipment performance. By knowing the performance specification of the equipment, the user can gauge whether the device is operating within the allowable range. For instance, if the instrument measures 62 meters for the cable length of the signal, as shown in Figure 89, then the user can compare this measurement with the operating margin for the equipment, which stated that the equalization range of the device will operate to at least 100 m of Belden 8281 cable. Therefore, the signal path has 38 meters of margin for the operation of this device. Remember that this measurement assumes a continuous run of cable. In some cases this measurement may have been made with a number of active devices within the signal path. If this is the case, then each link in turn should be measured separately, with a test signal source applied at one end of the cable and the measurement device at the other end. This will give a more reliable indication of the cable length within each part of the system and ensure that the system has sufficient headroom within each signal path. If the transmission distance exceeds the maximum length specified by the cable manufacturer, then additional active devices need to be inserted within the signal path to maintain the quality of the signal.

Timing between video sources

In order to transmit a smooth flow of information, both to the viewer and to the system hardware handling the signal, it is necessary that any mixed or sequentially switched video sources be in step at the point they come together. Relative timing between serial digital video signals that are within an operational range for use in studio equipment may vary from several nanoseconds to a few television lines. This relative timing can be measured by synchronizing a waveform monitor to an external source and comparing the relative positions of known picture elements.

Measurement of the timing differences in operational signal paths may be accomplished using the Active Picture Timing Test Signal available from the TG700 Digital Component Generator, in conjunction with the timing cursors and line select of an externally referenced WFM6120 or WFM7120 series serial component waveform monitor. The Active Picture Timing Test Signal will have a luminance white bar on the following lines:

525-line signals: Lines 21, 262, 284, and 525

625-line signals: Lines 24, 310, 336, and 622

1250-, 1125-, and 750-line formats: first and last active lines of each field

To set relative timing of signal sources such as cameras, telecines, or video recorders, it may be possible to observe the analog representation of the SAV timing reference signal, which changes amplitude as vertical blanking changes to active video. The waveform monitor must be set to "PASS" mode to display an analog representation of the timing reference signals, and be locked to an external synchronizing reference (EXT REF).

Figure 90. Interchannel timing measurement using green/magenta transition.


Interchannel timing of component signals

Timing differences between the channels of a single component video feed will cause problems unless the errors are very small. Signals can be monitored in the digital domain, but any timing errors will likely be present from the original analog source. Since analog components travel through different cables, different amplifiers in a routing switcher, etc., timing errors can occur if the equipment is not carefully installed and adjusted. There are several methods for checking the interchannel timing of component signals. Transitions in the color bar test signal can be used with the waveform method described below. Tektronix component waveform monitors, however, provide two efficient and accurate alternatives: the Lightning display, using the standard color bar test signal; and the bowtie display, which requires a special test signal generated by Tektronix component signal generators.

Waveform method

The waveform technique can be used with an accurately calibrated three-channel waveform monitor to verify whether transitions in all three channels are occurring at the same time. For example, a color bar signal has simultaneous transitions in all three channels at the boundary between the green and magenta bars (Figure 91).

To use the waveform method to check whether the green-magentatransitions are properly timed:

1. Route the color bar signal through the system under test and connect it to the waveform monitor.

2. Set the waveform monitor to PARADE mode and 1 LINE sweep.

3. Vertically position the display, if necessary, so the midpoint of the Channel 1 green-magenta transition is on the 350 mV line.

4. Adjust the Channel 2 and Channel 3 position controls so the zero level of the color-difference channels is on the 350 mV line. (Because the color-difference signals range from –350 mV to +350 mV, their zero level is at vertical center.)

5. Select WAVEFORM OVERLAY mode and horizontal MAG.

6. Position the traces horizontally for viewing the proper set of transitions. All three traces should coincide on the 350 mV line.

The Tektronix TG700 and TG2000 test signal generators can be programmed to generate a special reverse bars test signal, with the color bar order reversed for half of each field. This signal makes it easy to see timing differences by simply lining up the crossover points of the three signals. The result is shown in Figure 91.

Timing using the Tektronix Lightning display

The Tektronix Lightning display provides a quick, accurate check of interchannel timing. Using a color bar test signal, the Lightning display includes graticule markings indicating any timing errors. Each of the Green/Magenta transitions should pass through the center dot in the series of seven graticule dots crossing its path. Figure 92 shows the correct timing.

Figure 91. TG700 reverse color bar signal, H MAG, OVERLAY.

Figure 92. Lightning display for a 100% color bar signal.


The closely spaced dots provide a guide for checking transitions. These dots are 40 ns apart, while the widely spaced dots represent 80 ns. The electronic graticule eliminates the effects of CRT nonlinearity. If the color-difference signal is not coincident with luma, the transitions between color dots will bend. The amount of this bending represents the relative signal delay between luma and color-difference signal. The upper half of the display measures the Pb to Y timing, while the bottom half measures the Pr to Y timing. If the transition bends in towards the vertical center of the black region, the color-difference signal is delayed with respect to luma. If the transition bends out toward white, the color-difference signal is leading the luma signal.

Bowtie method

The bowtie display requires a special test signal with slightly different frequencies on the chroma channels than on the luma channel. For standard definition formats, a 500 kHz sine-wave packet might be on the luma channel and a 502 kHz sine-wave packet on each of the two chroma channels (Figure 93). Other frequencies could be used to vary the sensitivity of the measurement display.

Higher packet frequencies may be chosen for testing high-definition component systems. Markers generated on a few lines of the luma channel serve as an electronic graticule for measuring relative timing errors. The taller center marker indicates zero error, and the other markers are spaced at 20 ns intervals when the 500 kHz and 502 kHz packet frequencies are used. The three sine-wave packets are generated to be precisely in phase at their centers. Because of the frequency offset, the two chroma channels become increasingly out of phase with the luma channel on either side of center.

The waveform monitor subtracts one chroma channel from the luma channel for the left half of the bowtie display and the second chroma channel from the luma channel for the right half of the display. Each subtraction produces a null at the point where the two components are exactly in phase (ideally at the center). A relative timing error between one chroma channel and luma, for example, changes the relative phase between the two channels, moving the null off center on the side of the display for that channel. A shift of the null to the left of center indicates the color-difference channel is advanced relative to the luma channel. When the null is shifted to the right, the color-difference signal is delayed relative to the luma channel.

The null, regardless of where it is located, will be zero amplitude only if the amplitudes of the two sine-wave packets are equal. A relative amplitude error makes the null broader and shallower, making it difficult to accurately evaluate timing. If you need a good timing measurement, first adjust the amplitudes of the equipment under test. A gain error in the luma (CH1) channel will mean neither waveform has a complete null. If the gain is off only in Pb (CH2), the left waveform will not null completely, but the right waveform will. If the gain is off only in Pr (CH3), the right waveform will not null completely, but the left waveform will.
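The relationship between a timing error and the null position follows from the phase condition described above: the packets are in phase where f_y * t equals f_c * (t - offset). A minimal sketch of that arithmetic, assuming the standard 500/502 kHz packets (this is a derivation of the geometry, not the monitor's measurement algorithm):

```python
def bowtie_null_shift(timing_offset, f_y=500e3, f_c=502e3):
    """Position (in seconds, relative to line center) of the bowtie null
    when the chroma channel is delayed by timing_offset seconds.

    The packets are in phase where f_y * t = f_c * (t - timing_offset),
    so the null lands at t = f_c * timing_offset / (f_c - f_y).
    """
    return f_c * timing_offset / (f_c - f_y)

# A 20 ns chroma delay moves the null about 5 microseconds to the
# right on the trace -- one marker interval -- which is why the small
# frequency offset makes the display so sensitive to timing errors.
print(round(bowtie_null_shift(20e-9) * 1e6, 2))  # 5.02
```

Note the sign convention matches the text: a delayed chroma channel shifts the null to the right (positive t), an advanced one to the left.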


Figure 93. Bowtie test signal.


The bowtie test signal and display offer two benefits: they provide better timing resolution than the waveform and Lightning methods, and the display is readable at some distance from the waveform monitor screen.

Note that the bowtie test signal is an invalid signal, legal only in color-difference format. It becomes illegal when translated to RGB or composite formats and could create troublesome side effects in equipment that processes internally in RGB. (The concept of legal and valid signals is discussed in Appendix A – Gamut, Legal, Valid.)

The bowtie test method can be used to evaluate relative amplitudes and relative timing using component waveform monitors such as the Tektronix 1765, VM700T option 30, WFM601 series, and WFM7120/6120 series, which have bowtie display modes.

The left side of the display (Figure 94) compares Y and Pb; the right side compares Y and Pr. The 5 ns advance of the Pr component vs. Y is generally acceptable.

To use the bowtie display, route the signal from the component generator through the equipment under test and connect it to the waveform monitor. Activate the BOWTIE display. If the bowtie patterns have a sharp null, and the null is at the center of each line, the relative amplitudes and interchannel timing are correct. Interchannel timing errors will move the position of the null (Figure 95). A relative amplitude error (Figure 96) will decrease the depth of the null. An incomplete null combined with an offset from center indicates both amplitude and timing problems between the channels being compared.

Figure 96. Bowtie display, Pr gain error vs. Y.

Figure 94. Y & Pb left and Y & Pr right.

Figure 95. Bowtie display, Pb delayed 55 ns, Pr advanced 50 ns vs. Y.


RGB and color-difference waveforms

Although the colorist will make equipment adjustments in the familiar red, green, blue format, the engineer may wish to see an analog representation of the signal matrixed for digital encoding. The digital signal is usually a direct quantization and time multiplex of the luma, or Y', signal and the two chroma components, C'b and C'r. These three digital components can be converted to analog and directly displayed as a color-difference waveform parade, or matrixed back to red, green, and blue for the colorist. Examples of the two display formats are shown in Figure 97 and Figure 98.

Component gain balance

In a component signal, gain balance refers to the matching of levels between channels. If any of the components has an amplitude error relative to the others, it will affect the hue and/or saturation in the picture. Since, in color-difference formats, different colors contain different signal amplitudes from the red, green, and blue channels, it is not always obvious how individual channel gains should be adjusted. Several displays have been developed to help the operator make these adjustments.

The vector display

The vector display (Figure 99) has long been used for monitoring chrominance amplitude in composite NTSC or PAL systems. When the demodulation phase is adjusted correctly, usually by the operator, to place the color synchronizing burst pointing left along the horizontal axis, the composite vector display is a Cartesian (x,y) graph of the two decoded color components, with demodulated R-Y on the vertical axis and B-Y on the horizontal axis.

Operating a Digital Television System

Figure 97. WFM7120 R'G'B' parade display of 100% color bars.

Figure 98. WFM7120 Y'/C'b/C'r display of 100% color bars.

Figure 99. NTSC vectorscope display.


A similar display (Figure 100) for digital or analog component systems can be formed by plotting P'r or C'r on the vertical axis and P'b or C'b on the horizontal axis (Figure 101). Internal gains and display graticule box positions are adjusted in the monitoring instrument's design so the plot will fit the boxes for the chosen amplitude of color bars. If either color component has the wrong amplitude, the dots they produce will not fall in the graticule boxes. For example, if the P'r or C'r gain is too high, the dots will fall above the boxes in the top half of the screen and below the boxes in the bottom half. Either 75% or 100% color bars may be used. When taking measurements, make certain the source signal amplitude matches the vector graticule.

The polar display permits measurement of hue in terms of the relative phase of the chroma signal. Amplitude of the chroma signal is the displacement from center towards the color point. The transitions from one point to another also provide useful timing information. These timing differences appear as looping or bowing of the transitions, but can more easily be measured using the Lightning or bowtie methods.
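How a color bar lands at a particular point on the component vector display follows from the matrix that derives the color-difference components from R'G'B'. A minimal sketch using the Rec. 601 coefficients (display graticule scaling is omitted; the 0.564 and 0.713 factors are the standard C'b/C'r scalings):

```python
def ycbcr(r, g, b):
    """Rec. 601 luma and color-difference values for R'G'B' in [0, 1]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)   # horizontal axis of the vector display
    cr = 0.713 * (r - y)   # vertical axis
    return y, cb, cr

# 100% green and magenta bars land at diametrically opposite points,
# which is why their transition passes through the center of the display.
_, cb_g, cr_g = ycbcr(0.0, 1.0, 0.0)
_, cb_m, cr_m = ycbcr(1.0, 0.0, 1.0)
assert abs(cb_g + cb_m) < 1e-9 and abs(cr_g + cr_m) < 1e-9
```

A gain error in either color-difference channel scales one axis of this plot, pushing the dots out of (or inside) their graticule boxes exactly as described above.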

The two-axis vector display is convenient for monitoring or adjusting the set of two color-difference components, but makes no provision for evaluating luma gain or for making chroma/luma gain comparisons. The vector display would look the same if the luma channel were completely missing.

Figure 100. Component vector display.

Figure 102. The Tektronix Lightning display.

Figure 101. Development of the component vector display.


The Lightning display

Recognizing that a three-dimensional method would be desirable for monitoring the complete set of component signals, Tektronix developed a display (Figure 102) that provides both amplitude and interchannel timing information for the three signal channels on a single display. The only test signal required for definitive measurements is standard color bars.

The Lightning display is generated by plotting luma vs. P'b or C'b in the upper half of the screen and inverted luma vs. P'r or C'r in the lower half (Figure 103) – like two vector displays sharing the same screen. The bright dot at the center of the screen is blanking level (signal zero). Increasing luma is plotted upward in the upper half of the screen and downward in the lower half. If luma gain is too high, the plot will be stretched vertically. If P'r or C'r gain is too high (Figure 104), the bottom half of the plot will be stretched horizontally. If P'b or C'b is too high, the top half of the display will be stretched horizontally. The display also provides interchannel timing information by looking at the green/magenta transitions. When the green and magenta vector dots are in their boxes, the transition should intercept the center dot in the line of seven timing dots.

Figure 103. Development of the Tektronix Lightning display.

Figure 104. Lightning display with P'r gain error.


The Diamond display
The Tektronix Diamond display (Figure 105) provides a reliable method of detecting invalid colors before they show up in a finished production. Color is usually developed and finally displayed in R'G'B' format. If it were handled through the system in this format, monitoring to detect an illegal signal would be quite simple – just ensure that the limits are not exceeded. But most studio systems use a Y', C'b, C'r format for data transmission and processing, and the signal is often converted to PAL or NTSC for on-air transmission. Ultimately, all color video signals are coded as RGB for final display on a picture monitor.

The Tektronix Diamond display is generated by combining R', G', and B' signals. If the video signal is in another format, the components are first converted to R', G', and B', from which a valid and legal signal can be produced in any format that can handle 100% color bars. (A notable exception is the NTSC transmission standard, where regulatory agencies have set the white level too close to zero RF carrier to accommodate 100% color bars. See the Arrowhead display.)

The upper diamond (Figures 105 and 106) is formed from the transcoded signal by applying B'+G' to the vertical axis and B'–G' to the horizontal axis. The lower diamond is formed by applying –(R'+G') to the vertical axis and R'–G' to the horizontal axis. The two diamonds are displayed alternately to create the double diamond display. Low-pass filters (1.5 MHz for standard definition, wider for high definition) are applied to each axis to eliminate the short-term out-of-limit signals that are usually the product of combining different bandwidth signals in color-difference formats.
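The axis sums and differences above can be sketched in a few lines. The helper names are illustrative, and the instrument's internal plot scaling and low-pass filtering are omitted:

```python
def diamond_points(r_mv, g_mv, b_mv):
    """Map one R'G'B' sample (mV) to its two Diamond display points.
    Upper diamond: x = B'-G', y = B'+G'; lower: x = R'-G', y = -(R'+G')."""
    upper = (b_mv - g_mv, b_mv + g_mv)
    lower = (r_mv - g_mv, -(r_mv + g_mv))
    return upper, lower

def in_gamut(r_mv, g_mv, b_mv, lo=0.0, hi=700.0):
    """In gamut when every component lies between black and peak white."""
    return all(lo <= c <= hi for c in (r_mv, g_mv, b_mv))
```

Note that a blue-only excursion moves the upper-diamond point but leaves the lower-diamond point at the origin, matching the rule that blue errors affect only the top diamond.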

To predictably display all three components, they must lie between peak white, 700 mV, and black, 0 V (Figure 107). Picture monitors handle excursions outside the standard range (gamut) in different ways. For a signal to be in gamut, all signal vectors must lie within the G-B and G-R diamonds. If a vector extends outside a diamond, it is out of gamut. Errors in green amplitude affect both diamonds equally, while blue errors affect only the top diamond and red errors affect only the bottom diamond. Timing errors can be seen, using a color bar test signal, as bending of the transitions. In the Diamond display, monochrome signals appear as vertical lines. However, excursions below black can sometimes be masked in the opposite diamond. Therefore, it can be useful to split the diamond into two parts to see excursions below black in either of the G-B or G-R spaces.

Figure 105. Tektronix Diamond display of 75% color bars.

Figure 106. Development of the Diamond display, upper half.

Figure 107. Diamond display of legal color space.

By observing the Diamond display, the operator can be certain the video components being monitored can be translated into legal and valid signals in RGB color space. The Diamond display can be used for live signals as well as test signals.

The Arrowhead display
NTSC transmission standards will not accommodate 100% color bars, so you cannot be sure video that appears to be correct in the R', G', B' format can be faithfully transmitted through an amplitude-modulated NTSC transmitter. Traditionally, the signal had to be encoded into NTSC and monitored with an NTSC waveform monitor. The Tektronix Arrowhead display (Figures 108, 109, and 110) provides NTSC and PAL composite gamut information directly from the component signal.

The Arrowhead display plots luminance on the vertical axis, with blanking at the lower left corner of the arrow. The magnitude of the chroma subcarrier at every luminance level is plotted on the horizontal axis, with zero subcarrier at the left edge of the arrow. The upper sloping line forms a graticule indicating 100% color bar total luma + subcarrier amplitudes. The lower sloping graticule indicates a luma + subcarrier extending towards sync tip (maximum transmitter power). The electronic graticule provides a reliable reference to measure what luminance plus color subcarrier will be when the signal is later encoded into NTSC or PAL. An adjustable modulation depth alarm capability is provided to warn the operator that the composite signal may be approaching a limit. The video operator can now see how the component signal will be handled in a composite transmission system and make any needed corrections in production.
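A simplified model of the quantity the Arrowhead plots can be written directly. This sketch assumes the conventional NTSC color-difference scale factors (0.492 and 0.877) and ignores filtering and setup; the instrument's exact processing may differ:

```python
import math

# Conventional NTSC scale factors for the two modulated color-difference
# signals (an assumption for this sketch).
KU, KV = 0.492, 0.877

def composite_envelope_mv(y_mv, b_minus_y_mv, r_minus_y_mv):
    """Peak and trough (mV) of luma plus chroma subcarrier for one
    sample - the quantity the Arrowhead display plots against luminance."""
    sc = math.hypot(KU * b_minus_y_mv, KV * r_minus_y_mv)  # subcarrier amplitude
    return y_mv + sc, y_mv - sc

# 100% red bar: Y' = 209.3 mV, B'-Y' = -209.3 mV, R'-Y' = +490.7 mV
peak, trough = composite_envelope_mv(209.3, -209.3, 490.7)
```

For the 100% red bar the trough lands near -233 mV (roughly -33 IRE), well below blanking and heading toward the sync-tip region – illustrating why 100% bars strain an amplitude-modulated NTSC transmitter.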

Figure 108. Tektronix Arrowhead display, 75% component color bars for NTSC.

Figure 109. NTSC Arrowhead graticule values.

Figure 110. PAL Arrowhead graticule values.


How to Monitor Gamut
Gamut monitoring is important during the post-production process and during evaluation and ingest of program material into the broadcast facility. The original video material can go through a variety of format and color space conversions as it is processed, from HD RGB to HD YPbPr to SD YPbPr to composite. Each of these conversions has different valid ranges for the color space and allowed voltage levels of the signal. A simple illustration of this is to look at 100% color bars in both high definition and standard definition for the YPbPr signal, as shown in Figure 111. Notice the difference in levels of the signals, especially the green to magenta transition. If you were only familiar with standard definition, you might think the high-definition signal needs adjustment, but this is not the case: both signals are correctly aligned. Remember that high-definition signals typically use colorimetry equations based on ITU-R BT.709, and standard-definition signals use colorimetry equations based on ITU-R BT.601 (SMPTE 125M). These sets of equations, shown in Table 1, give rise to the differences in the video levels of the color bar signal. Transposing them from one video format to another can introduce artifacts into the video image.
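The level difference is easy to reproduce: computing Y' for the same 100% green bar with the two sets of coefficients gives visibly different results. This is a sketch; the coefficient values are those published in the two ITU-R recommendations:

```python
# Luma coefficients (Kr, Kg, Kb) from the two colorimetry standards
REC601 = (0.299, 0.587, 0.114)      # ITU-R BT.601 (standard definition)
REC709 = (0.2126, 0.7152, 0.0722)   # ITU-R BT.709 (high definition)

def luma_mv(r_mv, g_mv, b_mv, coeffs):
    """Weighted sum of gamma-corrected R'G'B' levels (mV) giving Y'."""
    kr, kg, kb = coeffs
    return kr * r_mv + kg * g_mv + kb * b_mv

# The same 100% green bar produces different luma levels in SD and HD
sd = luma_mv(0, 700, 0, REC601)
hd = luma_mv(0, 700, 0, REC709)
```

The green bar sits near 411 mV under BT.601 but near 501 mV under BT.709 – the kind of level difference visible in Figure 111, and the reason conversion between the formats requires re-matrixing rather than a simple level change.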

During the post-production process it is important to ensure that the video image, graphics, and titles are produced within gamut throughout the variety of format and color conversion processes the signal goes through, and that the color fidelity of the material is maintained. For instance, suppose a deep magenta background is produced in RGB for a graphic used in a high-definition production. This signal can go through a variety of color space conversions during the post-production process. The color fidelity of the image could be lost when it is finally broadcast as a composite signal, and the background could appear as a muddy magenta because processing has distorted the image. The original RGB color could not be carried in that form through the video chain, and the final color ends up not being representative of the original intentions.

Figure 111. 100% color bar signal YPbPr for HD. 100% color bar signal YPbPr for SD.


The waveform monitor can be used to identify gamut errors within the video signal. There are several approaches to take depending on the processing of the video signal.

Simple gamut alarm indications are given in the status bar display of the instrument. The type of error can be identified by viewing the video session display, where lowercase and uppercase letters indicate which gamut limits have been exceeded. For instance, Figure 112 shows the status bar with RGB and Composite gamut errors highlighted in red. Viewing the video session display shows R---Bb. The uppercase letters "R---B" show that the upper gamut limit has been exceeded for the red and blue channels, and the lowercase letter "b" shows that the lower gamut limit has been exceeded for the blue channel. By using the split Diamond display on the WVR7120, the user could simply adjust the gain of the red and blue components with a color corrector to bring the high-amplitude signals within the dashed bounding box limits of the display. Adjusting the lower black level control for the blue channel will bring the lower limits of the blue channel within the dashed bounding box limits of the display. During this process the user should monitor the adjustments being made to ensure they do not introduce any other gamut violations into the signal; these errors would also be indicated in the status bar and within the video session display. The picture display can be enabled by the user to highlight the region where the gamut errors are occurring, shown as the hashed area in Figure 112.

A similar process can be done for the composite gamut errors. In this case, the luma signal is within limits, but the uppercase "C" indicates the upper limit of chroma has been exceeded. By using the Arrowhead display, the user can adjust the proc-amp controls to bring the waveform trace within the bounding box of the display. The user needs to ensure that during this adjustment they do not introduce other errors into the signal. The digital processing of the instrument allows the user to make composite gamut adjustments without the need for the signal to be converted into the analog composite domain.

Figure 112. WVR7120 showing gamut errors.


Tape Quality Assurance (QA) is an important part of the video chain. Broadcasters and program providers typically have a specification detailing the requirements a program must meet in order to be compliant and to stay within the limits set by the broadcaster or program provider. These specifications can contain requirements on the position of logos, titles, and graphics relating to the artistic look of the program, but can also contain technical requirements on video and audio levels. There is currently a European document (EBU R103) which provides a specification of the RGB gamut and luma limits for the video signal (Figure 113). The latest Tektronix waveform monitors and rasterizers have a predefined limit set for the EBU R103 specification which sets the limits for the gamut parameters. This specification allows a ±5% threshold for RGB gamut (-35 mV to 735 mV) with an area limit of 1%, and limits of -1% and 103% for the luma signal. The thresholds are completely configurable within the instrument, and different gamut specifications can be selected by the user to conform to the appropriate producer or broadcaster requirements. Note that EBU R103 does not specify limits for composite gamut.
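A minimal sketch of an EBU R103-style check, using the -35 mV / 735 mV thresholds and 1% area limit quoted above (`ebu_r103_check` is an illustrative name, not instrument firmware):

```python
def ebu_r103_check(rgb_samples_mv, lo=-35.0, hi=735.0, area_pct=1.0):
    """Report an RGB gamut violation only when more than `area_pct`
    percent of the image's samples have any component outside [lo, hi].
    Returns (violation_flag, percent_of_samples_out_of_range)."""
    out = sum(
        1 for r, g, b in rgb_samples_mv
        if not all(lo <= c <= hi for c in (r, g, b))
    )
    pct_out = 100.0 * out / len(rgb_samples_mv)
    return pct_out > area_pct, pct_out
```

With the 1% area limit, a single out-of-range pixel in a 100-sample image is tolerated; two such pixels are flagged – the same logic the area parameter applies in the instrument's gamut alarms.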

Table 39. Definition of Limits.

Diamond High – The level above which RGB components are considered too large and out of gamut. Allowed range: 630 mV to 756 mV; EBU R103 limit: 735 mV; Tek default: 721 mV.

Diamond Low – The level below which RGB components are considered too negative and out of gamut. Allowed range: -50 mV to +35 mV; EBU R103 limit: -35 mV; Tek default: -21 mV.

Area – The percentage of total image pixels that can be outside the current gamut limits, yet not be reported as a gamut error. Allowed range: 0-10%; EBU R103 limit: 1%; Tek default: 0%.

Arrowhead NTSC Min – The minimum allowed level for an NTSC composite signal derived from the SDI signal. Allowed range: -50 IRE to -10 IRE; Tek default: -24 IRE.

Arrowhead NTSC Max – The maximum allowed level for an NTSC composite signal derived from the SDI signal. Allowed range: 90 IRE to 135 IRE; Tek default: 120 IRE.

Arrowhead PAL Min – The minimum allowed level for a PAL composite signal derived from the SDI signal. Allowed range: -400 mV to -100 mV; Tek default: -230 mV.

Arrowhead PAL Max – The maximum allowed level for a PAL composite signal derived from the SDI signal. Allowed range: 630 mV to 950 mV; Tek default: 930 mV.

Arrowhead Area – The percentage of total image pixels that can be outside the current gamut limits, yet not be reported as a gamut error. Allowed range: 0-10%; Tek default: 0%.

Luma Min – The minimum allowed level for luminance on the incoming SDI signal and on a composite signal derived from it. Allowed range: -6% to +5%; EBU R103 limit: -1%; Tek default: -1%.

Luma Max – The maximum allowed level for luminance on the incoming SDI signal and on a composite signal derived from it. Allowed range: 90% to 108%; EBU R103 limit: 103%; Tek default: 103%.

Luma Area – The percentage of total image pixels that can be outside the current luma limits, yet not be reported as an error. Allowed range: 0-10%; EBU R103 limit: 1%; Tek default: 0%.

Figure 113. Gamut Limits for EBU R103 on the WVR7120.


Once the limits to be used have been defined, it is important to ensure that the appropriate alarms are selected for gamut and other conditions. These alarms will alert the user to problems within the material, and can trigger a variety of actions (screen/text icon, beep, log, SNMP trap, or ground closure) as shown in Figure 114. Ensuring that the log is selected will provide an error log of when these conditions occurred, referenced either to the internal clock of the instrument or to timecode if present. Logging errors against timecode allows the error occurrence to be more easily located in the program material. The error log can be downloaded via the network connection from the instrument to a computer, which allows the log to be printed or attached to a report.

Conclusion
It has been the goal of this primer to provide background information on the transition of the television studio from analog to digital and high-definition video formats.

Today's video professional faces many challenges, and the transition to digital should be one of those providing a great long-term return. The typical broadcaster and production studio will operate in both standard- and high-definition video formats. The new digital formats, natural extensions of familiar analog video, offer a superior channel for the video professional's creativity, a higher level of performance and reliability for the engineer, and a new, exciting viewing experience for the consumer that will continue the industry's growth and success.

There will be many changes in your future. The authors hope you find the transition from analog to digital video among the most rewarding.

Figure 114. Video Content alarm configuration.


Appendix A – Color and Colorimetry
The television color specification is based on standards defined by the CIE (Commission Internationale de l'Éclairage) in 1931. This system is based on experiments in which a group of observers matched a color to an additive mix of three primaries – red, green, and blue. Averaging the results yields a graph that shows the color matching function (Figure A1) of a standard (average) observer. RGB tristimulus values are restricted in gamut and cannot produce all colors; to produce the full range of colors, negative values of RGB would be required. This is an inappropriate model for television colorimetry, so the CIE specified an idealized set of primary XYZ tristimulus values. These are a set of all-positive values converted from the RGB tristimulus values, where the value Y is proportional to the luminance of the additive mix. This specification is used as the basis for color within today's video standards.

The CIE standardized a procedure for normalizing XYZ tristimulus values to obtain a two-dimensional plot of values x and y of all colors for a relative value of luminance, as specified by the following equations. A color is plotted as a point in an (x, y) chromaticity diagram, illustrated in Figure A2.

x = X / (X + Y + Z)
y = Y / (X + Y + Z)
z = Z / (X + Y + Z)
1 = x + y + z
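These normalization equations can be applied directly. The D65 tristimulus values used below are the commonly tabulated ones (Y normalized to 1) and are an input assumption of this sketch:

```python
def chromaticity(X, Y, Z):
    """Normalize CIE XYZ tristimulus values to (x, y) chromaticity
    coordinates; z is redundant because x + y + z = 1."""
    s = X + Y + Z
    return X / s, Y / s

# D65 white point from the commonly tabulated XYZ values (Y normalized to 1)
x, y = chromaticity(0.95047, 1.0, 1.08883)
```

This reproduces x ≈ 0.3127, y ≈ 0.3290, the Illuminant D65 coordinates listed later in this appendix.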

Limits are defined for various video formats that show all possible colors for that format. The color-coded triangles (SMPTE = yellow, EBU/PAL/SECAM = blue, NTSC 1953 = green) in Figure A2 are specified by the x, y coordinates in Table A1.

The x, y coordinates chosen depend on the phosphors used in manufacture of the CRT. The NTSC phosphors specified in 1953 have been superseded by those of EBU and SMPTE because of the requirement for brighter displays.

Figure A1. CIE 1931 Color matching function (2 degree observer).

Figure A2. CIE x y Chromaticity with coordinate values for SMPTE, EBU/PAL/SECAM, and NTSC 1953.


White
An important consideration in the definition of colors is the white point of the system; therefore, within each format a white point is defined which is the addition of red, green, and blue in equal quantities.

The CIE defined several standard sources in 1931:

Source A: a tungsten-filament lamp with a color temperature of 2854K

Source B: a model of noon sunlight with a color temperature of 4874K

Source C: a model of average daylight with a color temperature of 6774K

Illuminant C (Source C) was used in the original definition of NTSC. The CIE later defined a series of daylight illuminants called the Daylight D series. Illuminant D65, with a color temperature of 6504K and slightly different x, y coordinates, is predominantly used with video standards today.

Each of the sources has a white point and is given an x, y value on the chromaticity diagram:

Illuminant A: x = 0.4476, y = 0.4075
Illuminant B: x = 0.3484, y = 0.3516
Illuminant C: x = 0.3101, y = 0.3162
Illuminant D65: x = 0.3127, y = 0.3290

Current standards assume the television studio is illuminated by a source with the characteristics of Illuminant D65. In practice, studio lighting may not be Illuminant D65, and the white balance of the camera is compensated by adjusting the gains of the red, green, and blue components.

Red, green, and blue components
Components in some form are a necessary part of any color television system. Color cameras usually analyze the light in the image to develop video signals for three primary colors: red, green, and blue. Since each of these gamma-corrected R'G'B' signals carries part of the information in the image, and all are required to recreate a complete image, they are referred to as "components" of the color video signal. As in the more generic use of the term, each component is a necessary, but not sufficient, part of the whole. The basic R'G'B' component signals are used again at the output of a television system to display the image on a monitor or TV set. Therefore, it makes sense to say that one of the primary tasks of a television plant is to convey these component signals

SMPTE         Red: x = 0.630, y = 0.340   Green: x = 0.310, y = 0.595   Blue: x = 0.155, y = 0.070
              White: Illuminant D65, x = 0.3127, y = 0.3290

EBU/Rec 709   Red: x = 0.640, y = 0.330   Green: x = 0.300, y = 0.600   Blue: x = 0.150, y = 0.060
              White: Illuminant D65, x = 0.3127, y = 0.3290

PAL/SECAM     Red: x = 0.640, y = 0.330   Green: x = 0.290, y = 0.600   Blue: x = 0.150, y = 0.060
              White: Illuminant D65, x = 0.3127, y = 0.3290

NTSC (1953)   Red: x = 0.670, y = 0.330   Green: x = 0.210, y = 0.710   Blue: x = 0.140, y = 0.080
              White: Illuminant C, x = 0.3101, y = 0.3162

Table A1. CIE x, y Coordinate Values for Various Formats.


through all the distribution, technical, and artistic processes and deliver them to a display for viewing. Although some equipment, especially in the past, distributed RGB signals beyond the camera (or camera control unit), video has almost always been translated or encoded into other formats for recording, interconnection, or long-distance transmission, then decoded for display.

Another means of representing the red, green, and blue primary colors is a three-dimensional R'G'B' color cube. All colors can be represented within the bounds of the RGB color cube, as shown in Figure A3.

The color television system was developed to be compatible with existing black and white television receivers. The gamma-corrected luma signal, Y', is created from the red, green, and blue camera signals for transmission to black and white or color receivers as a monochrome picture. By knowing the difference between the monochrome or luma channel and any two color channels, we can recover red, green, and blue to drive the color picture tube. Since human vision's green response most closely tracks brightness, a majority of the green information is used to make up the luma signal, and the remaining red and blue color-difference channels can be transmitted at a lower bandwidth.

The luma signal and the two color-difference signals contain all the information needed to display any of the broad range of colors possible in the original image. The basic set of three components (R', G', and B') is thus translated to a new set of three components (Y', R'-Y', B'-Y') by a simple matrix, as shown in Figure A4. The color-difference component form has two advantages over R'G'B'. First, substantially less bandwidth is required to convey the necessary information: a color-difference system needs only one high-bandwidth channel because all the fine detail in the image is carried by the luma signal. An R'G'B' system, on the other hand, requires high bandwidth in all three channels. Second, gain distortions have less severe effects on a color-difference component set than on R'G'B'. A low level on any one channel in a color-difference set will produce subtle changes in hue or changes in saturation only. A low level in R'G'B', however, will produce a distinctly wrong-colored image. The concept of transcoding R'G'B' to one luma and two color-difference signals has proven very useful. Such signals, with relatively minor variations, are the basis for all existing component video formats and also for composite broadcast standards throughout the world.

Figure A4. Matrix of R'G'B' signal-to-color difference.

Figure A3. R'G'B' color cube.

For standard definition (Figure A5):

Y' = 0.587G' + 0.114B' + 0.299R' — values range from 0 to 700 mV; sync at –300 mV

B'–Y' = –0.587G' + 0.886B' – 0.299R' — values range over ±620 mV

R'–Y' = –0.587G' – 0.114B' + 0.701R' — values range over ±491 mV

In the component video domain, component R'G'B' signals are often referred to as G'B'R' because the majority of the luminance signal is made up of green channel information. Therefore, there is a correspondence between Y'P'bP'r and G'B'R'.

Color-difference values (Figure A5) are first scaled to produce an equal dynamic range of ±350 mV for ease of processing within various systems. The analog component signal is denoted Y'P'bP'r, and the digital component system, which introduces an offset to the color-difference signals to allow similar processing ranges for the Y' and color-difference values, is denoted Y'C'bC'r.

Figure A5. Color-difference signals scaled and offset for digital quantizing.

Analog components (Y'P'bP'r):

Y' = 0.587G' + 0.114B' + 0.299R' — values range from 0 to 700 mV; sync at –300 mV
P'b = 0.564 (B'–Y') — values range over ±350 mV
P'r = 0.713 (R'–Y') — values range over ±350 mV

Digital components (Y'C'bC'r):

Y' = 0.587G' + 0.114B' + 0.299R' — values range from 0 to 700 mV; sync at –300 mV
C'b = 0.564 (B'–Y') + 350 mV — values range from 0 to 700 mV
C'r = 0.713 (R'–Y') + 350 mV — values range from 0 to 700 mV
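The matrixing, scaling, and offset steps can be chained as a short sketch, using the standard-definition coefficients above (function names are illustrative):

```python
def gbr_to_ypbpr(g_mv, b_mv, r_mv):
    """Matrix gamma-corrected G'B'R' (mV) into analog Y'P'bP'r using the
    standard-definition (Rec. 601) equations."""
    y = 0.299 * r_mv + 0.587 * g_mv + 0.114 * b_mv
    pb = 0.564 * (b_mv - y)   # compresses B'-Y' (about ±620 mV) into ±350 mV
    pr = 0.713 * (r_mv - y)   # compresses R'-Y' (about ±491 mV) into ±350 mV
    return y, pb, pr

def ypbpr_to_ycbcr_mv(y, pb, pr):
    """Offset the color-difference signals by +350 mV so all three
    channels share the same 0 to 700 mV processing range."""
    return y, pb + 350.0, pr + 350.0

# 100% blue bar: G' = 0, B' = 700, R' = 0 (mV)
y, pb, pr = gbr_to_ypbpr(0, 700, 0)
_, cb, cr = ypbpr_to_ycbcr_mv(y, pb, pr)
```

For the 100% blue bar this yields Y' ≈ 79.8 mV and P'b ≈ +350 mV, and after the offset both color-difference channels sit within the 0 to 700 mV range.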


Because of this matrixing and scaling, not all possible Y'C'bC'r values are used when the signal is converted back to RGB. As illustrated in Figure A6, only about 25% of all possible signal values in the Y'C'bC'r domain are needed to represent the entire gamut of colors in the RGB domain. Because of this, care must be taken when translating between formats that the dynamic range is not exceeded in the conversion process.

Gamut, legal, valid
The term gamut refers to the range of colors reproducible by a television system when the scene is illuminated by a reference white (Illuminant D65 for NTSC/PAL). This gamut is defined by the chromaticity values, or CIE chromaticity coordinates, for a given system. This range of colors of variable saturation is reproduced in the picture monitor by red, green, and blue or R'G'B' signal values. When these are equal valued (i.e., R' = G' = B'), the image is colorless, to the extent that it represents shades of gray on a properly adjusted picture monitor. Otherwise, a colored hue of nonzero saturation results, and all colors in the gamut of reproducible colors are possible by independently adjusting the values of the R'G'B' signals.

Since the values of the R'G'B' signals directly represent these colors, the term gamut is often used to refer to the range of colors represented by all combinations of R'G'B' signals that lie within the legal limits of 0 and 700 mV. R'G'B' signals extending outside this voltage range may produce a desirable color on a given picture monitor, but are outside the valid color gamut. They may be clipped or compressed in subsequent signal processing, distorting the color when displayed on another picture monitor.

So in the R'G'B' domain, any channel exceeding either the upper or lower limit represents an invalid signal, since the color falls outside the valid color gamut. It is also illegal, since one or more of the components exceeds the legal limits.

Legal signals are simply those signals that do not violate the signal-voltage limits for the particular format in use, i.e., signals within the allowed signal limits for that format. So a legal signal in a color-difference format like Y'C'bC'r can be invalid in that it can represent a color outside the valid color gamut. Such an invalid signal will always produce an illegal signal when transcoded to R'G'B'.

Figure A6. Y'C'bC'r 3D color space.


Figure A7. A valid color-difference signal can be converted to a legal RGB signal.

Figure A8. An invalid signal, legal in one format but illegal when converted.

A valid signal is one that is within color gamut and remains legal when translated into any other format. A valid signal is always legal, but a legal signal is not necessarily valid. The latter case most often occurs with a color-difference format component signal, where the signal levels are not independent as they are in RGB systems. Figures A7 and A8 show how a simple gain distortion in a color-difference component signal can make the signal invalid, though not illegal.

Figure A7 shows a legal and valid color-difference signal (top) and the legal RGB signal (bottom) to which it translates. In Figure A8, however, the luma channel of the color-difference signal (top) is distorted; it has a relative gain of only 90 percent. When this distorted signal is transcoded to the RGB format (bottom), the result is an illegal signal – all three components extend below the minimum allowed signal level. Since the distorted color-difference signal cannot be translated into a legal RGB signal, it is invalid. Other forms of distortion can also create invalid signals.

Valid signals can be translated, encoded, or input to any part of a video system without causing amplitude-related problems.
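The legal/valid distinction can be expressed as a small test. This is a sketch using the standard-definition matrix; a tiny tolerance absorbs floating-point round-off at the gamut boundary:

```python
def gbr_to_ypbpr(g, b, r):
    """Rec. 601 forward matrix and color-difference scaling, levels in mV."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.564 * (b - y), 0.713 * (r - y)

def ypbpr_to_gbr(y, pb, pr):
    """Invert the color-difference scaling and matrix back to G'B'R'."""
    b = y + pb / 0.564
    r = y + pr / 0.713
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return g, b, r

def is_legal(components, lo=0.0, hi=700.0, tol=1e-6):
    """Legal = every component within the format's voltage limits."""
    return all(lo - tol <= c <= hi + tol for c in components)

def is_valid_ypbpr(y, pb, pr):
    """Valid = the signal still represents a legal color when
    translated back to R'G'B'."""
    return is_legal(ypbpr_to_gbr(y, pb, pr))

# A correctly matrixed 100% green bar is legal and valid...
y, pb, pr = gbr_to_ypbpr(700, 0, 0)
# ...but with luma gain at only 90% (the Figure A8 case) the signal
# stays legal in Y'P'bP'r terms while the recovered R'G'B' components
# dip below black, making it invalid.
```

Running `is_valid_ypbpr(y, pb, pr)` on the undistorted bar returns True; scaling only the luma channel by 0.9 makes it return False, reproducing the Figure A8 behavior.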


Format conversion tables
The following conversion tables show how translation between Y'P'bP'r and G'B'R' values can be calculated. In Table A2, the values of 100% color bars are translated from G'B'R' into Y'P'bP'r.

The dynamic range of R'G'B' (0 to 700 mV) is not exceeded, and the conversion process results in signals that do not exceed the analog dynamic range of Y'P'bP'r (0 to 700 mV for the luma channel and ±350 mV for the color-difference channels). This signal is said to be legal and valid. A signal is legal if it falls within the dynamic range of its format. A signal is valid if it represents a color that is within the valid color gamut. Such a signal, when transcoded to R'G'B', will always produce an R'G'B' signal that is legal.

When a signal exceeds the dynamic range of a format, it becomes illegal. Table A3 shows signals which are legal in the Y'P'bP'r domain; however, when these values are converted to G'B'R', some of the results fall outside the 0 to 700 mV threshold set for G'B'R', indicating that they are invalid and represent colors outside the valid gamut. Such signals would likely be distorted by processing equipment, which is expected to handle signals only within the specified format range and may clip signals that exceed those values. Tektronix has developed specific displays to assist operators and engineers in maintaining legal and valid signals.

Y' P'b P'r G' B' R'(mV) (mV) (mV) (mV) (mV) (mV) Color

700 350 350 330 1320 1191 Illegal GBR

700 –350 –350 1070 80 209 Illegal GBR

700 0 350 450 700 1191 Illegal GBR

700 0 –350 950 700 209 Illegal GBR

700 350 0 580 1320 700 Illegal GBR

700 –350 0 820 80 700 Illegal GBR

700 0 0 700 700 700 White

0 350 350 –370 620 491 Illegal GBR

0 –350 –350 370 –620 –491 Illegal GBR

0 0 350 –250 0 491 Illegal GBR

0 0 –350 250 0 –491 Illegal GBR

0 350 0 –120 620 0 Illegal GBR

0 –350 0 120 –620 0 Illegal GBR

0 0 0 0 0 0 Black

100% Color Bars

Color     G' (mV)   B' (mV)   R' (mV)   Y' (mV)   P'b (mV)   P'r (mV)

White 700 700 700 700 0 0

Yellow 700 0 700 620.2 –350 56.7

Cyan 700 700 0 490.7 118.3 –350

Green 700 0 0 410.9 –231.7 –293.3

Magenta 0 700 700 289.1 231.7 293.3

Red 0 0 700 209.3 –118.3 350

Blue 0 700 0 79.8 350 –56.7

Black 0 0 0 0 0 0

Table A3. Legal but Invalid Y'P'bP'r Signals with Equivalent Illegal G'B'R' Signals.

Table A2. Legal and Valid G'B'R' Signal with Equivalent Legal and Valid Y'P'bP'r Signal.
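Table A2 can be reproduced directly from the Rec. 601 analog matrix. The sketch below (illustrative code, not from the primer) applies the forward equations to 100% color bars; results agree with the table to within the rounding of the published coefficients:

```python
# Forward Rec. 601 analog matrix, values in mV (0..700 mV G'B'R' input).
def gbr_to_ypbpr(g, b, r):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma, 0..700 mV
    pb = 0.564 * (b - y)                   # B' - Y' scaled to +/-350 mV
    pr = 0.713 * (r - y)                   # R' - Y' scaled to +/-350 mV
    return y, pb, pr

bars = {"White": (700, 700, 700), "Yellow": (700, 0, 700),
        "Cyan": (700, 700, 0),    "Green": (700, 0, 0),
        "Magenta": (0, 700, 700), "Red": (0, 0, 700),
        "Blue": (0, 700, 0),      "Black": (0, 0, 0)}

for name, (g, b, r) in bars.items():
    y, pb, pr = gbr_to_ypbpr(g, b, r)
    print(f"{name:8s} Y'={y:6.1f}  P'b={pb:6.1f}  P'r={pr:6.1f}")
```

For example, yellow yields Y' = 620.2 mV, P'b ≈ –350 mV, P'r ≈ 57 mV, matching the Yellow row of Table A2.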


Appendix B – Television Clock Relationships

Figure B1. Video clock derivations.
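The clock relationships that Figure B1 illustrates are exact rational ratios, so they can be checked with integer arithmetic. The sketch below covers a few of the best-known relationships (per SMPTE 170M and ITU-R BT.601; the figure itself may show additional derivations):

```python
from fractions import Fraction

# Hedged arithmetic sketch of common television clock derivations.
fh_525 = Fraction(4_500_000, 286)   # 525/60 line rate, derived from the 4.5 MHz sound carrier
fsc = fh_525 * Fraction(455, 2)     # NTSC color subcarrier = 455/2 x line rate
fh_625 = Fraction(15_625)           # 625/50 line rate (Hz)

print(float(fh_525))   # approximately 15734.27 Hz
print(float(fsc))      # approximately 3579545.45 Hz

# Both scanning standards share the 13.5 MHz Rec. 601 luma sample rate:
print(858 * fh_525, 864 * fh_625)  # 858 and 864 samples per total line
```

Because 858/286 = 3, the 525-line product is exactly 13,500,000 Hz, as is 864 × 15,625 for 625 lines; this shared rate is what makes a common component digital interface possible.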


Appendix C – Standard Definition Analog Composite Video Parameters

Figure C1. PAL and NTSC system horizontal interval.

Figure C2. SECAM system horizontal interval.

Parameter                      PAL B/G      NTSC         SECAM                PAL-M        PAL-N
Sync Type                      negative bi-level for all systems
Subcarrier Freq. (MHz)         4.43361875   3.579545     4.406250/4.250000    3.57561149   3.58205625
Lines/Frame                    625          525          625                  525          625
Field Freq. (Hz)               50.00        59.94        50.00                59.94        50.00
Line Freq. (kHz)               15.625       15.734264    15.625               15.734264    15.625
Line Period (µs)               64.000       63.555       64.000               63.555       64.000
Line Blanking (µs)             12.05        10.90        12.05                10.90        12.05
Back Porch (µs)                5.8          4.7          5.8                  4.7          5.8
Front Porch (µs)               1.55         1.50         1.55                 1.50         1.55
Sync Width (µs)                4.7          4.7          4.7                  4.7          4.7
Sync Amplitude (mV)            –300         –286         –300                 –286         –300
Sync Amplitude (IRE)           –43          –40          –43                  –40          –43
Sync Rise/Fall (µs)            0.200        0.250        0.200                0.250        0.200
Sync to Burst (µs)             5.6          5.3          –                    5.8          5.6
Burst Duration (µs)            2.25 ±0.28   2.23 to 3.11 –                    2.25 ±0.28   2.51 ±0.28
Burst Duration (Cycles of SC)  10 ±1        9 ±1         –                    9 ±1         9 ±1
Burst Ampl. (mV)               300          286          166                  286          300
Field Period (ms)              20           16.6833      20                   16.6833      20
Field Blanking (lines)         25           21           25                   21           25

Table C1. Standard Definition Composite Video Parameters.
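The timing entries in Table C1 are reciprocals of the frequency entries, which makes a quick cross-check possible. The sketch below uses values taken from the table (no new data) to confirm that the line period is 1/line frequency in microseconds and the field period is 1/field frequency in milliseconds:

```python
# Cross-check of Table C1 timing values (inputs assumed from the table).
line_freq_hz = {"PAL B/G": 15625.0, "NTSC": 15734.264}
field_freq_hz = {"PAL B/G": 50.00, "NTSC": 59.94}

for system, fh in line_freq_hz.items():
    line_period_us = 1e6 / fh                 # table: 64.000 / 63.555 us
    field_period_ms = 1e3 / field_freq_hz[system]  # table: 20 / 16.6833 ms
    print(f"{system}: line {line_period_us:.3f} us, field {field_period_ms:.4f} ms")
```

Running this reproduces 64.000 µs and 20 ms for PAL B/G, and 63.555 µs and 16.6833 ms for NTSC.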


Appendix D – Reference Standards and Practices for Television

ANSI S4.40-1992, Digital Audio Engineering – Serial Transmission Format for Two-Channel Linearly Represented Digital Audio Data (AES-3)
ANSI/SMPTE 125M-1995, Television – Component Video Signal 4:2:2 – Bit-Parallel Digital Interface
ANSI/SMPTE 170M-2004, Television – Composite Analog Video Signal – NTSC for Studio Applications
ANSI/SMPTE 240M-1999 (Archive 2004), Television – Signal Parameters – 1125-Line High-Definition Production Systems
ANSI/SMPTE 259M-2008, Television – SDTV Digital Signal/Data Serial Digital Interface
SMPTE 260M-1999 (Archive 2004), Television – Digital Representation and Bit-Parallel Interface – 1125/60 High-Definition Production System
ANSI/SMPTE 272M-2004, Television – Formatting AES/EBU Audio and Auxiliary Data into Digital Video Ancillary Data Space
ANSI/SMPTE 274M-2008, Television – 1920 x 1080 Scanning and Analog and Parallel Digital Interfaces for Multiple Picture Rates
ANSI/SMPTE 291M-2006, Television – Ancillary Data Packet and Space Formatting
ANSI/SMPTE 292M-2008, Television – 1.5 Gb/s Signal/Data Serial Interface
ANSI/SMPTE 293M-2003, Television – 720 x 483 Active Line at 59.94-Hz Progressive Scan Production – Digital Representation
ANSI/SMPTE 294M-2001, Television – 720 x 483 Active Line at 59.94-Hz Progressive Scan Production – Bit-Serial Interfaces
ANSI/SMPTE 295M-1997, Television – 1920 x 1080 50 Hz – Scanning and Interface
ANSI/SMPTE 296M-2001, Television – 1280 x 720 Scanning, Analog and Digital Representation and Analog Interface
ANSI/SMPTE 299M-2004, Television – 24-Bit Digital Audio Format for HDTV Bit-Serial Interface
ANSI/SMPTE 305M-2005, Television – Serial Data Transport Interface (SDTI)
ANSI/SMPTE 310M-2004, Television – Synchronous Serial Interface for MPEG-2 Digital Transport Stream
SMPTE 318M-1999 – Synchronization of 59.94 or 50 Hz Related Video and Audio Systems in Analog and Digital Areas – Reference Signals
ANSI/SMPTE 320M-1999, Television – Channel Assignments and Levels on Multichannel Audio Media
ANSI/SMPTE 346M-2000 (Archive 2006), Television – Time Division Multiplexing Video Signals and Generic Data over High-Definition Television Interfaces
ANSI/SMPTE 347M-2001 (Archive 2006), Television – 540 Mb/s Serial Digital Interface – Source Image Format Mapping
ANSI/SMPTE 348M-2005 (Archive 2006), Television – High Data Rate Serial Data Transport Interface (HD-SDTI)
ANSI/SMPTE 349M-2001 (Archive 2006), Television – Transport of Alternate Source Image Formats through SMPTE 292M
ANSI/SMPTE 352M-2002, Television – Video Payload Identification for Digital Television Interfaces
ANSI/SMPTE 372M-2002, Television – Dual-Link 292M Interface for 1920 x 1080 Picture Raster
ANSI/SMPTE 424M-2006, Television – 3 Gb/s Signal/Data Serial Interface
ANSI/SMPTE 425M-2008, Television – 3 Gb/s Signal/Data Serial Interface – Source Image Format Mapping
ANSI/SMPTE 428-1-2006, D-Cinema – Distribution Master (DCDM) – Image Characteristics
ANSI/SMPTE 428-9-2008, D-Cinema – Distribution Master – Image Pixel Structure Level 3 – Serial Distribution Interface Signal Formatting
SMPTE 2010-2008, Vertical Ancillary Data Mapping of ANSI/SCTE 104 Messages (Revision of RP 2010-2007)
ANSI/SMPTE 2016-1-2007, Television – Format for Active Format Description and Bar Data
ANSI/SMPTE 2016-2-2007, Television – Format for Pan-Scan Information
ANSI/SMPTE 2016-3-2007, Television – Vertical Ancillary Data Mapping of Active Format Description and Bar Data
ANSI/SMPTE 2016-4-2007, Television – Vertical Ancillary Data Mapping of Pan-Scan Information
ANSI/SMPTE 2020-1-2008, Television – Format of Audio Metadata and Description of the Asynchronous Serial Bitstream Transport
ANSI/SMPTE 2020-2-2008, Television – Vertical Ancillary Data Mapping of Audio Metadata – Method A
ANSI/SMPTE 2020-3-2008, Television – Vertical Ancillary Data Mapping of Audio Metadata – Method B
CIE Publication No 15.2, Colorimetry – Second Edition (1986), Central Bureau of the Commission Internationale de L'Eclairage, Vienna, Austria
ITU-R BT.601-6-2007 – Studio Encoding Parameters of Digital Television for Standard 4:3 and Wide-Screen 16:9 Aspect Ratios
ITU-R BT.656-4 – Interfaces for Digital Component Video Signals in 525-Line and 625-Line Television Systems Operating at the 4:2:2 Level of Recommendation ITU-R BT.601 (Part A)
ITU-R BT.709-5-2002 – Parameter Values for the HDTV Standards for Production and International Programme Exchange
ITU-R BT.1120-2 – Digital Interfaces for 1125/60 and 1250/50 HDTV Studio Signals
SMPTE Engineering Guideline EG33-1998 – Jitter Characteristics and Measurements
SMPTE RP160-1997 – Three-Channel Parallel Analog Component High-Definition Video Interface
SMPTE RP165-1994 – Error Detection Checkwords and Status Flags for Use in Bit-Serial Digital Interfaces for Television
SMPTE RP168-2002 – Definition of Vertical Interval Switching Point for Synchronous Video Switching


Appendix D – Reference Standards and Practices for Television (Continued)

SMPTE RP177-1993 – Derivation of Basic Television Color Equations
SMPTE RP178-1996 – Serial Digital Interface Checkfield for 10-Bit 4:2:2 Component and 4fsc Composite Digital Signals
SMPTE RP184-1996 – Specification of Jitter in Bit-Serial Digital Interfaces
SMPTE RP186-2008 – Video Index Information Coding for 525- and 625-Line Television Systems
SMPTE RP187-1995 – Center, Aspect Ratio and Blanking of Video Images
SMPTE RP192-2003 – Jitter Measurement Procedures in Bit-Serial Digital Interfaces
SMPTE RP198-1998 – Bit-Serial Digital Checkfield for Use in High-Definition Interfaces
SMPTE RP211-2000 (Archive 2006) – Implementation of 24P, 25P, and 30P Segmented Frames for 1920 x 1080 Production Format
SMPTE RP218-2002 – Specification for Safe Area and Safe Title Areas for Television Systems
SMPTE RP219-2002 – High Definition, Standard Definition Compatible Color Bar Signal
SMPTE RP221-2008 – Specification for Extraction of 4x3 Areas from Digital 16x9 Images for Television Systems
SMPTE RP291-2006 – Assigned Ancillary Identification Codes
SMPTE RP2010-2007 – Vertical Ancillary Data Mapping of ANSI/SCTE 104 Messages
SMPTE RDD6-2008 – Television – Description and Guide to the Use of the Dolby E Audio Metadata Serial Bitstream
SMPTE RDD8-2008 – Storage and Distribution of Teletext Subtitles and VBI Data for High Definition Television

Appendix E – Bibliography

Margaret Craig, Television Measurements, NTSC Systems, Tektronix, 1994
Margaret Craig, Television Measurements, PAL Systems, Tektronix, 1991
Keith Jack, Video Demystified, A Handbook for the Digital Engineer, HighText Interactive, 1996
David K. Fibush, A Guide to Digital Television Systems and Measurements, Tektronix, 1997
David K. Fibush, Tektronix, Video Testing in a DTV World, SMPTE Journal, 2000
Earl F. Glynn, efg's Computer Lab, http://www.efg2.com/Lab
John Horn, Solving the Component Puzzle, Tektronix, 1997
Charles Poynton, A Technical Introduction to Digital Video, John Wiley & Sons, 1996
Charles Poynton, Frequently Asked Questions about Color, http://www.poynton.com/ColorFAQ.html, 1999
Charles Poynton, A Guided Tour of Color Space, 1997
Charles Poynton, YUV and Luminance Considered Harmful: A Plea for Precise Terminology in Video, 2000
Guy Lewis, Applied Technology, Color and the Diamond Display, Broadcast Engineering, November, 1994
Michael Robin, Video Concepts, Miranda Technologies, 1999
Michael Robin and Michel Poulin, Digital Television Fundamentals, Design and Installation of Video and Audio Systems, McGraw-Hill, 1997
Peter D. Symes, Video Compression, Fundamental Compression Techniques and an Overview of the JPEG and MPEG Compression Systems, McGraw-Hill, 1998
Jerry C. Whitaker, Television Engineering Handbook, Featuring HDTV Systems, Revised Edition by K. Blair Benson, McGraw-Hill, 1992


Appendix F – Glossary

4:2:2 – A commonly used term for a component digital video format. The details of the format are specified in the ITU-R BT.601 standard document. The numerals 4:2:2 denote the ratio of the sampling frequencies of the single luminance channel to the two color-difference channels. For every four luminance samples, there are two samples of each color-difference channel. See ITU-R BT.601.

4fsc – Four-times-subcarrier sampling rate used in composite digital systems. In NTSC, this is 14.3 MHz; in PAL, this is 17.7 MHz. Standard definition component sampling is 13.5 MHz for luma and 6.75 MHz for chroma in both 525/60 and 625/50 formats.

AES/EBU audio – Informal name for a digital audio standard established jointly by the Audio Engineering Society and European Broadcasting Union organizations.

algorithm – A set of rules or processes for solving a problem in a finite number of steps.

aliasing – Defects in the picture typically caused by insufficient sampling or poor filtering of digital video. Defects are typically seen as jaggies on diagonal lines and twinkling or brightening in picture detail.

analog – An adjective describing any signal that varies continuously, as opposed to a digital signal that contains discrete levels representing the binary digits 0 and 1.

ancillary data – Data supporting the video signal or program, time multiplexed into the video signal during the horizontal and/or vertical blanking intervals. Ancillary data may be sent between the EAV and SAV packets in horizontal blanking and in larger blocks during vertical blanking. Ancillary data may include checksums, multi-channel digital audio, and other data.

asynchronous – A transmission procedure that is not synchronized by a clock. Digital video is not asynchronous because sampling clock information must be extracted from data signal transitions for decoding.

A-to-D converter (analog-to-digital) – A circuit that uses digital sampling to convert an analog signal into a digital representation of that signal.

bandwidth – 1) The difference between the upper and lower limits of a frequency band, often measured in megahertz (MHz). 2) The complete range of frequencies over which a circuit or electronic system can function with less than a 3 dB signal loss. 3) The information-carrying capability of a particular television channel.

baseline shift – A form of low-frequency distortion resulting in a shift in the DC level of the signal.

bit – A binary representation of 1 or 0. One of the quantized levels of a pixel.

bit parallel – Byte-wise transmission of digital video down a multi-conductor cable where each pair of wires carries a single bit. This standard is covered under SMPTE 125M, EBU 3267-E and ITU-R BT.656.

bit serial – Bit-wise transmission of digital video down a single conductor such as coaxial cable. May also be sent through fiber optics. This standard is covered under ITU-R BT.656.

bit slippage – 1) Occurs when word framing is lost in a serial signal so the relative value of a bit is incorrect. This is generally reset at the next serial signal: TRS-ID for composite and EAV/SAV for component. 2) The erroneous reading of a serial bit stream when the recovered clock phase drifts enough to miss a bit. 3) A phenomenon which occurs in parallel digital data buses when one or more bits gets out of time in relation to the rest. The result is erroneous data. Differing cable lengths is the most common cause.

bit stream – A continuous series of bits transmitted on a line.

BNC – A bayonet-locking coaxial connector (Bayonet Neill-Concelman) used extensively in television.

brightness signal – Same as the luminance signal (Y). This signal carries information about the amount of light at each point in the image.

byte – A complete set of quantized levels containing all of the bits. Bytes consisting of 8 to 10 bits per sample are typical.

cable equalization – The process of altering the frequency response of a video amplifier to compensate for high-frequency losses in coaxial cable.

CCIR – International Radio Consultative Committee (Comité Consultatif International des Radiocommunications), an international standards committee now replaced by the International Telecommunication Union (ITU).

CCIR-601 – See ITU-R BT.601.

CCIR-656 – See ITU-R BT.656.

channel coding – Describes the way in which the “1”s and “0”s of the data stream are represented on the transmission path.


chroma key – The process of controlling the replacement of part of a video image with a second image. The control signal is developed from characteristics of the chrominance of a video signal.

chrominance signal, chroma – The modulated subcarrier sidebands in a composite video signal. Also used to describe the color-difference signals in a component system – that is, those carrying information about the hue (which color) and saturation (how much color) in a pixel.

clock jitter – Timing uncertainty of the data cell edges in a digital signal.

clock recovery – The reconstruction of timing information from incoming digital data.

coaxial cable – A transmission line with a concentric pair of signal-carrying conductors: an inner conductor and an outer conductive metallic sheath. The sheath aids in preventing external radiation from affecting the signal on the inner conductor and minimizes signal radiation from the transmission line.

coding – Representing each level of a video signal as a number, usually in binary form.

coefficients – Numbers (often constants) that express some property of a physical system in a quantitative way.

color correction – A process by which the coloring in a television image is altered or corrected electronically. Care must be taken to ensure that the modified video does not exceed the limits of subsequent processing or transmission systems.

color-difference signals – Video signals which convey only color information: for example, unmodulated R-Y and B-Y, I and Q, U and V, Pr and Pb, etc.

component video signals – A set of signals, each of which represents a portion of the information needed to generate a full color image: for example, R, G, and B; Y, I, and Q; or Y, R-Y, and B-Y.

component analog – The unencoded output of a camera, videotape recorder, etc., consisting of three primary color signals: green, blue, and red (GBR) that together convey all necessary picture information. In some component video formats, these three components have been translated into a luminance signal and two color-difference signals, for example, Y, B-Y, and R-Y.

component digital – A digital representation of a component analog signal set, most often Y'C'bC'r. The encoding parameters are specified by ITU-R BT.601. For standard definition formats, the parallel interface is specified by ITU-R BT.656 and SMPTE 125M (1991).

composite analog – An encoded video signal, such as NTSC or PAL video, that includes horizontal and vertical synchronizing information.

composite digital – A digitally encoded video signal, such as NTSC or PAL video, that includes horizontal and vertical synchronizing information.

contouring – Video picture defect due to quantizing at too coarse a level.

cross color – Spurious signals resulting from high-frequency luminance information being interpreted as color information in decoding a composite signal. Typical examples are “rainbows” on venetian blinds, striped shirts, etc.

cross luminance – Spurious signals occurring in the Y channel as a result of composite chroma signals being interpreted as luminance, such as “dot crawl” or “busy edges” on colored areas.

decoder – A device used to recover the component signals from a composite (encoded) source. Decoders are used in displays and in various processing hardware where component signals are required from a composite source, such as composite chroma keying or color correction equipment. Also used to represent a device for extracting video from a compressed signal.

delay – The time required for a signal to pass through a device or conductor.

demultiplexer (demux) – A device used to separate two or more signals that were previously combined by a compatible multiplexer and transmitted over a single channel.

deserializer – A device that converts serial digital information to parallel.

differential gain – A change in chrominance amplitude of a video signal caused by a change in luminance level of the signal.

differential phase – A change in chrominance phase of a video signal caused by a change in luminance level of the signal.

digital components – Component signals in which the values for each pixel are represented by a set of numbers.

digital word – The number of bits treated as a single entity by the system.

discrete – Having an individual identity. An individual circuit component.

dither – Typically a random, low-level signal (oscillation) which may be added to an analog signal prior to sampling. Often consists of white noise of one quantizing level peak-to-peak amplitude.


dither component encoding – A slight expansion of the analog signal levels so that the signal comes in contact with more quantizing levels. The result is smoother transitions. This is done by adding white noise (which is at the amplitude of one quantizing level) to the analog signal prior to sampling.

drift – Gradual shift or change in the output over a period of time due to change or aging of circuit components. Change is often caused by thermal instability of components.

D-to-A converter (digital-to-analog) – A device that converts digital signals to analog signals.

DVTR – Abbreviation of digital videotape recorder.

EAV – End of active video in component digital systems. One of two (EAV and SAV) timing reference packets.

EBU – European Broadcasting Union. An organization of European broadcasters that, among other activities, produces technical statements and recommendations for the 625/50 line television system.

EBU TECH.3267-E – The EBU recommendation for the parallel interface of 625-line digital video signals. A revision of the earlier EBU Tech.3246-E, which in turn was derived from CCIR-601 (now ITU-R BT.601) and contributed to the CCIR-656 (ITU-R BT.656) standard.

EDH (error detection and handling) – SMPTE RP 165 method for recognizing inaccuracies in the serial digital signal. It may be incorporated into serial digital equipment and employ a simple LED error indicator.

equalization (EQ) – Process of altering the frequency response of a video amplifier to compensate for high-frequency losses in coaxial cable.

embedded audio – Digital audio multiplexed onto a serial digital data stream in the time allocated for ancillary data.

encoder – A device used to form a single (composite) color signal from a set of component signals. An encoder is used whenever a composite output is required from a source (or recording) which is in component format. Also represents a device used for video compression.

error concealment – A technique used when error correction fails (see error correction). Erroneous data is replaced by data synthesized from surrounding pixels.

error correction – A scheme that adds overhead to the data to permit a certain level of errors to be detected and corrected.

eye pattern – An oscilloscope waveform view of overlaid highs and lows of the data signal. The changing data vs. the clock-synchronized sweep creates the look of an eye. The waveform is used to evaluate transport layer analog performance.

field-time (linear) distortion – An unwanted change in video signal amplitude that occurs in the time frame of a vertical scan (i.e., 16.66 ms at 60 Hz).

format, interconnect – The configuration of signals used for interconnection of equipment in a specified system. Different formats may use different signal composition, reference pulses, etc.

format, scanning – In analog and standard definition digital, the total number of lines and the field rate, i.e., 625/50. In digital high-definition, the number of luma pixels, the number of active video lines, the field rate, and the number of fields per frame, i.e., 1280/720/59.94/1:1.

format conversion – The process of both encoding/decoding and resampling of digital rates.

frequency modulation – Modulation of a sinewave or “carrier” by varying its frequency in accordance with amplitude variations of the modulating signal.

frequency response rolloff – A distortion in a transmission system where the higher frequency components are not conveyed at their original full amplitude, creating a possible loss of color saturation.

gain – Any increase or decrease in strength of an electrical signal. Gain may be expressed in decibels.

gamma – The transfer characteristic, input vs. output. In a television system, gamma correction is applied at the source to provide additional gain in dark areas so as to compensate for the CRT and human vision. Gamma correction at the source avoids enhancing noise at the destination and reduces the number of bits necessary to convey a satisfactory picture.

gamut – The range of colors allowed for a video signal. Valid color gamut is defined as all colors represented by all possible combinations of legal values of an R'G'B' signal. Signals in other formats may represent colors outside valid gamut but still remain within their legal limits. These signals, when transcoded to R'G'B', will fall outside legal limits for R'G'B'. This may lead to clipping, crosstalk, or other distortions.

G'B'R', G'B'R' format – The same signals as R'G'B'. The sequence is rearranged to indicate the mechanical sequence of the connectors in the SMPTE standard. Waveform parade displays also commonly present the channels in this order.


group delay – A signal defect caused by different frequencies having differing propagation delays (delay at 1 MHz is different from delay at 5 MHz).

horizontal interval (horizontal blanking interval) – The time period between lines of active video.

interconnect format – See format.

interconnect standard – See standard.

interlace scanning – A scanning format where the picture is captured and displayed in two fields. The second field is offset one-half line from the first field to present the lines of each field vertically interposed between the lines of the other.

interpolation – In digital video, the creation of new pixels in the image by some method of mathematically manipulating the values of neighboring pixels.

invalid signal – See valid signal.

I/O – Abbreviation of input/output. Typically refers to sending information or data signals to and from devices.

ITU-R – The International Telecommunication Union, Radiocommunication Sector (replaces the CCIR).

ITU-R BT.601 – An international standard for component digital television from which the SMPTE 125M (was RP 125) and EBU 3246-E standards were derived. ITU-R BT.601 defines the sampling systems, matrix values, and filter characteristics for both Y, B-Y, R-Y and GBR component digital television.

ITU-R BT.656 – The physical parallel and serial interconnect scheme for ITU-R BT.601. ITU-R BT.656 defines the parallel connector pinouts as well as the blanking, sync, and multiplexing schemes used in both parallel and serial interfaces. Reflects definitions in EBU Tech 3267 (for 625-line signals) and in SMPTE 125M (parallel 525) and SMPTE 259M (serial 525).

jaggies – Slang for the stair-step aliasing that appears on diagonal lines. Caused by insufficient filtering, violation of the Nyquist theorem, and/or poor interpolation.

jitter – An undesirable random signal variation with respect to time.

keying – The process of replacing part of one television image with video from another image; i.e., chroma keying and insert keying.

legal/illegal – A signal is legal if it stays within the gamut appropriate for the format in use. A legal signal does not exceed the voltage limits specified for any of the signal channels of the format. An illegal signal is one that is sometimes outside those limits in one or more channels. A signal can be legal but still not be valid.

luma, luminance (Y) – The video signal that describes the amount of light in each pixel; equivalent to the signal provided by a monochrome camera. Y is often generated as a weighted sum of the R', G', and B' signals.

MAC – Multiplexed Analog Component video. A means of time multiplexing component analog video down a single transmission channel such as coax, fiber, or a satellite channel. Usually involves digital processes to achieve the time compression.

microsecond (µs) – One millionth of a second: 1 × 10^-6 or 0.000001 second.

monochrome signal – A “single color” video signal – usually a black and white signal but sometimes the luminance portion of a composite or component color signal.

MPEG – Moving Picture Experts Group. An international group of industry experts set up to standardize compressed moving pictures and audio.

multi-layer effects – A generic term for a mix/effects system that allows multiple video images to be combined into a composite image.

multiplexer (mux) – A device for combining two or more electrical signals into a single, composite signal.

nanosecond (ns) – One billionth of a second: 1 × 10^-9 or 0.000000001 second.

neutral colors – The range of gray levels, from black to white, but without color. For neutral areas in the image, the R'G'B' signals will all be equal; in color-difference formats, the color-difference signals will be zero.

NICAM (near instantaneous companded audio multiplex) – A digital audio coding system originally developed by the BBC for point-to-point links. A later development, NICAM 728, is used in several European countries to provide stereo digital audio to home television receivers.

nonlinear encoding – Relatively more levels of quantization are assigned to small amplitude signals, relatively fewer to the large signal peaks.

nonlinearity – Having gain vary as a function of signal amplitude.

NRZ – Non-return to zero. A coding scheme that is polarity sensitive. 0 = logic low; 1 = logic high.

NRZI – Non-return to zero inverse. A coding scheme that is polarity insensitive. 0 = no change in logic; 1 = a transition from one logic level to the other.


NTSC (National Television Systems Committee) – The organization that formulated the “NTSC” system. Usually taken tomean the NTSC color television system itself, or its interconnectstandards. NTSC is the television standard currently in use in theU.S., Canada and Japan. NTSC image format is 4:3 aspect ratio,525 lines, 60 Hz and 4 MHz video bandwidth with a total 6 MHz ofvideo channel width. For detailed specifications of this format seeAppendix C

Nyquist sampling theorem – Intervals between successive samples must be equal to or less than one-half the period of high-est frequency.

orthogonal sampling – Sampling of a line of repetitive video signal in such a way that samples in each line are in the same horizontal position (co-timed).

PAL format – A color television format having 625 scan lines(rows) of resolution at 25 frames per second (25 Hz). For detailedspecifications of this format see Appendix C

PAL (Phase Alternate Line) – The name of the color televisionsystem in which the V component of burst is inverted in phase fromone line to the next in order to minimize hue errors that may occurin color transmission.

PAL-M – Uses a 3.57561149MHz subcarrier and 525 scanninglines. One frame is produced every 1/30 of a second. This format isprimarily used within Brazil. For detailed specifications of this formatsee Appendix C

PAL-N – Uses a 3.58205625 MHz subcarrier and 625 scanninglines. One frame is produced every 1/25 of a second. This format isprimarily used within Argentina. For detailed specifications of thisformat see Appendix C

parallel cable – A multi-conductor cable carrying parallel data.

patch panel – A manual method of routing signals using a panel of receptacles for sources and destinations, with cables to interconnect them.

peak to peak – The amplitude (voltage) difference between the most positive and the most negative excursions (peaks) of an electrical signal.

phase distortion – A picture defect caused by unequal delay (phase shifting) of different frequency components within the signal as they pass through different impedance elements – filters, amplifiers, ionospheric variations, etc. The defect in the picture is “fringing,” like diffraction rings, at edges where the contrast changes abruptly.

phase error – A picture defect caused by the incorrect relative timing of a signal in relation to another signal.

phase shift – The movement in relative timing of a signal in relation to another signal.

pixel – The smallest distinguishable and resolvable area in a digital video image; a single point on the screen; a single sample of the picture. Derived from the words picture element.

PRBS – Pseudo random binary sequence.

primary colors – Colors, usually three, that are combined to produce the full range of other colors within the limits of a system. All non-primary colors are mixtures of two or more of the primary colors. In television, the primary colors are specific sets of red, green, and blue.

production switcher (vision mixer) – A device that allows transitions between different video pictures. Also allows keying and matting (compositing).

progressive scanning – A scanning format in which the picture is captured in one top-to-bottom scan.

propagation delay (path length) – The time it takes for a signal to travel through a circuit, piece of equipment, or length of cable.

quantization – The process of converting a continuous analog input into a set of discrete output levels.

quantizing noise – The noise (deviation of a signal from its original or correct value) that results from the quantization process. In serial digital video, a granular type of noise present only in the presence of a signal.

rate conversion – 1) Technically, the process of converting from one sample rate to another. The digital sample rate for the component format is 13.5 MHz; for the composite format it is either 14.3 MHz for NTSC or 17.7 MHz for PAL. 2) Often used incorrectly to indicate both resampling of digital rates and encoding/decoding.

Rec. 601 – See ITU-R BT.601.

reclocking – The process of clocking the data with a regenerated clock.

resolution – The number of bits (four, eight, ten, etc.) determines the resolution of the digital signal:

4 bits = a resolution of 1 in 16
8 bits = a resolution of 1 in 256
10 bits = a resolution of 1 in 1024

Eight bits is the minimum acceptable for broadcast TV.

RP 125 – See SMPTE 125M.
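The relationship between word length and resolution, and the quantization step itself, can be sketched as follows (hypothetical helper names, values normalized to the range 0.0–1.0):

```python
def levels(bits):
    """Number of distinct codes an n-bit word can represent."""
    return 2 ** bits

def quantize(x, bits):
    """Map a normalized value x in [0.0, 1.0] to the nearest n-bit code."""
    return round(x * (levels(bits) - 1))
```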

RGB, RGB format, RGB system – The basic parallel component set (Red, Green, and Blue) in which a signal is used for each primary color. Also used to refer to the related equipment, interconnect format, or standards. The same signals may also be called “GBR” as a reminder of the mechanical sequence of connections in the SMPTE interconnect standard.


rise time – The time taken for a signal to make a transition from one state to another – usually measured between the 10% and 90% completion points on the transition. Shorter or “faster” rise times require more bandwidth in a transmission channel.
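The 10%–90% measurement can be sketched as below (an illustrative routine, assuming a clean, monotonic low-to-high edge, with linear interpolation between samples):

```python
def _crossing(samples, dt, level):
    """Time at which a rising edge crosses the given level."""
    for i in range(1, len(samples)):
        if samples[i - 1] <= level <= samples[i]:
            frac = (level - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    raise ValueError("level not crossed")

def rise_time(samples, dt):
    """10%-90% rise time of a monotonic low-to-high transition."""
    lo, hi = samples[0], samples[-1]
    t10 = _crossing(samples, dt, lo + 0.1 * (hi - lo))
    t90 = _crossing(samples, dt, lo + 0.9 * (hi - lo))
    return t90 - t10
```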

routing switcher – An electronic device that routes a user-supplied signal (audio, video, etc.) from any input to any user-selected output(s).

sampling – Process where analog signals are captured (sampled) for measurement.

sampling frequency – The number of discrete sample measurements made in a given period of time. Often expressed in Megahertz for video.

SAV – Start of active video in component digital systems. One of two (EAV and SAV) timing reference packets.

scan conversion – The process of resampling a video signal to convert its scanning format to a different format.

scope – Short for oscilloscope (waveform monitor) or vectorscope, devices used to measure the television signal.

scrambling – 1) To transpose or invert digital data according to a prearranged scheme in order to break up the low-frequency patterns associated with serial digital signals. 2) The digital signal is shuffled to produce a better spectral distribution.
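A self-synchronizing scrambler of the kind used on serial digital interfaces can be sketched as below. The generator polynomial x^9 + x^4 + 1 follows the SDI convention, but treat the routine as an illustration rather than a conformant implementation (the real interface also applies NRZI coding after scrambling):

```python
WIDTH = 9            # register length for x^9 + x^4 + 1
TAPS = (9, 4)        # feedback taps (assumed SDI scrambling polynomial)
MASK = (1 << WIDTH) - 1

def _feedback(state):
    """XOR of the tapped register bits."""
    return ((state >> (TAPS[0] - 1)) ^ (state >> (TAPS[1] - 1))) & 1

def scramble(bits, state=0):
    """Each output bit is the input XORed with feedback from prior outputs."""
    out = []
    for b in bits:
        o = b ^ _feedback(state)
        out.append(o)
        state = ((state << 1) | o) & MASK  # shift our own output into the register
    return out

def descramble(bits, state=0):
    """Self-synchronizing: the receiver shifts the received bits instead."""
    out = []
    for b in bits:
        out.append(b ^ _feedback(state))
        state = ((state << 1) | b) & MASK  # shift the received bit into the register
    return out
```

Because the descrambler's register is fed with the received (scrambled) bits, the two registers track each other automatically and the receiver needs no separate synchronization.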

SECAM (Sequential Color and Memory) – The French-developed color encoding standard similar to PAL. The major differences between the two are that in SECAM the chroma is frequency modulated and the R’-Y’ and B’-Y’ signals are transmitted line-sequentially. The image format is 4:3 aspect ratio, 625 lines, 50 Hz and 6 MHz video bandwidth within a total 8 MHz of video channel width. For detailed specifications of this format see Appendix C.

segmented frames – A scanning format in which the picture is captured as a frame in one scan, as in progressive formats, but transmitted as even lines in one field and odd lines in the next field, as in an interlaced format.

serial digital – Digital information that is transmitted in serial form. Often used informally to refer to serial digital television signals.

serializer – A device that converts parallel digital information to serial digital form.

SMPTE (Society of Motion Picture and Television Engineers) – A professional organization that recommends standards for the television and film industries.

SMPTE Format, SMPTE Standard – In component television, these terms refer to the SMPTE standards for parallel component analog video interconnection.

standard, interconnect standard – Voltage levels, etc., that describe the input/output requirements for a particular type of equipment. Some standards have been established by professional groups or government bodies (such as SMPTE or EBU). Others are determined by equipment vendors and/or users.

still store – Device for storage of specific frames of video.

synchronous – A transmission procedure by which the bit and character stream are slaved to accurately synchronized clocks at both the receiving and sending ends. In serial digital video, the synchronous receiver sampling clock is extracted from incoming data signal transitions.

sync word – A synchronizing bit pattern, differentiated from the normal data bit patterns, used to identify reference points in the television signal and to facilitate word framing in a serial receiver.

telecine – A device for converting motion picture film to a video signal.

temporal aliasing – A visual defect that occurs when the image being sampled moves too fast for the sampling rate. An example is a wagon wheel that appears to rotate backwards.

time base corrector – A device used to correct time base errors and stabilize the timing of the video output from a tape machine.

TDM (time division multiplex) – The management of multiple signals on one channel by alternately sending portions of each signal and assigning each portion to particular blocks of time.

time-multiplex – In the case of digital video, a technique for sequentially interleaving data from the three video channels so they arrive to be decoded and used together. In component digital formats, the sequence might be Y, Cb, Y, Cr, Y, Cb, etc. In this case Y has twice the total data capacity (detail) of either of the color-difference channels. Ancillary data would be time-multiplexed into the data stream during non-video time.
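The interleave can be sketched in Python (hypothetical function name). This sketch assumes the Cb, Y, Cr, Y word order of the 4:2:2 component digital interface, in which each color-difference sample is shared by a pair of luma samples:

```python
def multiplex_422(y, cb, cr):
    """Interleave 4:2:2 samples into one stream: Cb0 Y0 Cr0 Y1 Cb1 Y2 Cr1 Y3 ..."""
    assert len(y) == 2 * len(cb) == 2 * len(cr), "4:2:2 needs twice as many Y samples"
    stream = []
    for i in range(len(cb)):
        stream += [cb[i], y[2 * i], cr[i], y[2 * i + 1]]
    return stream
```

Note how Y contributes two words for every one word from each color-difference channel, giving luma twice the data capacity described above.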

TRS – Timing reference signals in composite digital systems (four words long). For component video, EAV and SAV provide the timing reference.

TRS-ID (timing reference signal identification) – A reference signal used to maintain timing in composite digital systems. It is four words long.


truncation – Deletion of the lower significant bits in a digital system.

valid signal – A video signal in which all colors represented lie within the valid color gamut. A valid signal will remain legal when translated to RGB or other formats. A valid signal is always legal, but a legal signal is not necessarily valid. Signals that are not valid will be processed without problems in their current format, but problems may be encountered if the signal is translated to a new format.

valid/invalid – A valid signal meets two constraints: it is legal in the current format, and it will remain legal when properly translated to any other color signal format.

VTR (video tape recorder) – A device which permits audio and video signals to be recorded on magnetic tape.

waveform – A graphical representation of the relationship between voltage or current and time.

word – See byte.

Y, C1, C2 – A generalized set of CAV signals: Y is the luminance signal, C1 is the 1st color-difference signal, and C2 is the 2nd color-difference signal.

Y', C'b, C'r – A gamma-corrected set of color-difference signals used in digital component formats.

Y, I, Q – The set of CAV signals specified in 1953 for the NTSC system: Y is the luminance signal, I is the 1st color-difference signal, and Q is the 2nd color-difference signal.

Y, Pb, Pr – A version of (Y, R-Y, B-Y) specified for the SMPTE analog component standard.

Y, R-Y, B-Y – The general set of CAV signals used in the PAL system, as well as for some composite encoders and most composite decoders in NTSC systems. Y is the luminance signal, R-Y is the 1st color-difference signal, and B-Y is the 2nd color-difference signal.

Y, U, V – Luminance and color-difference components for PAL systems. Often imprecisely used in conversation as an alternative to Y', P'b, P'r.

A glossary of video terms and acronyms is available from the Tektronix website, Literature Number 25W-15215-1.

Acknowledgements

The authors wish to thank the following people for their contributions to this booklet:

David Fibush, Jeff Noah, Margaret Craig, Dan Baker, Ken Ainsworth, Lionel Durant, and all of the people at Tektronix who have put so much time and study into the art of good video.

The standards committee members, for their development of standards providing a clear direction for the industry.

The engineers, authors, and publishers who provide an education for all of us.

About the authors

Guy Lewis studied physics, mathematics, and communications at Baylor and Texas A&M Universities and has served as a television station chief engineer and television group director of engineering. He joined Tektronix in 1987 after 20 years with RCA in broadcast field engineering, sales management, and product line management. He retired from Tektronix in 1999, where he was Product Marketing Manager, TV Test, responsible for television waveform monitoring and baseband signal generation products.

Michael Waidson received a B.S. in communications from the University of Kent at Canterbury, U.K. He began his career with an electronics manufacturer of consumer television sets, working on advanced television receiver design in the digital video department. For the past 12 years, he has worked in the television broadcast industry in Europe and the U.S. He is currently an application engineer in Tektronix’ Video Business Unit.

Disclaimer

Throughout this booklet, we describe ways digital television could be done. Devices will improve, and clever engineers will invent new ways to do things better and more economically. The important thing is to comply with standards as they evolve in order to maintain a high degree of economical compatibility. Enjoy the transition!

This booklet is an interpretation of information from sources believed to be reliable. It is intended to provide a broad understanding of many different individually standardized formats. Neither Tektronix nor the authors guarantee the accuracy or completeness of the information and shall not be held responsible for any errors or omissions arising out of the use of this document. The reader is urged to consult industry standards organizations for specific information.


For Further Information

Tektronix maintains a comprehensive, constantly expanding collection of application notes, technical briefs and other resources to help engineers working on the cutting edge of technology. Please visit www.tektronix.com

Copyright © 2009, Tektronix. All rights reserved. Tektronix products are covered by U.S. and foreign patents, issued and pending. Information in this publication supersedes that in all previously published material. Specification and price change privileges reserved. TEKTRONIX and TEK are registered trademarks of Tektronix, Inc. All other trade names referenced are the service marks, trademarks or registered trademarks of their respective companies. 01/09 EA/25W-14700-6

Contact Tektronix:

ASEAN / Australasia (65) 6356 3900

Austria +41 52 675 3777

Balkans, Israel, South Africa and other ISE Countries +41 52 675 3777

Belgium 07 81 60166

Brazil +55 (11) 3759 7600

Canada 1 (800) 661-5625

Central East Europe, Ukraine and the Baltics +41 52 675 3777

Central Europe & Greece +41 52 675 3777

Denmark +45 80 88 1401

Finland +41 52 675 3777

France +33 (0) 1 69 86 81 81

Germany +49 (221) 94 77 400

Hong Kong (852) 2585-6688

India (91) 80-22275577

Italy +39 (02) 25086 1

Japan 81 (3) 6714-3010

Luxembourg +44 (0) 1344 392400

Mexico, Central, South America and Caribbean 52 (55) 54247900

Middle East, Asia and North Africa +41 52 675 3777

The Netherlands 090 02 021797

Norway 800 16098

People’s Republic of China 86 (10) 6235 1230

Poland +41 52 675 3777

Portugal 80 08 12370

Republic of Korea 82 (2) 6917-5000

Russia & CIS +7 (495) 7484900

South Africa +27 11 206 8360

Spain (+34) 901 988 054

Sweden 020 08 80371

Switzerland +41 52 675 3777

Taiwan 886 (2) 2722-9622

United Kingdom & Eire +44 (0) 1344 392400

USA 1 (800) 426-2200

For other areas contact Tektronix, Inc. at: 1 (503) 627-7111