Training Report Final - Copy (1)


    INDUSTRIAL TRAINING SEMINAR REPORT

    on

    Doordarshan studios and Broadcasting

    Submitted in partial fulfillment of the requirement

    for the award of the

    Degree of

    Bachelor of Technology

    in

    Electronics and Communication Engineering

    Submitted by:

    DIVYA CHOUDHARY

    PARIDHI SHARMA

    BABITA CHOUDHARY

    SHRUTI GUPTA

    JULY 2011

Mody Institute of Technology and Science (a deemed university u/s 3 of the UGC Act, 1956)

    Lakshmangarh, Sikar 332311 (Rajasthan)


    ACKNOWLEDGEMENT

It has indeed been my privilege to receive the scintillating supervision of all the members of the organization, who have always been helpful and kind enough to devote time to supervise me during the training and to extend all possible help in spite of their busy schedule.


    Contents

1. Doordarshan
   1.1 What is television
   1.2 About Doordarshan
   1.3 History
   1.4 Present status

2. Doordarshan studio set up
   2.1 Television broadcasting system
   2.2 Studio floor
   2.3 Production control room
   2.4 Master control room
   2.5 Other facilities
   2.6 Video chain
   2.7 Vision mixer

   3.1 Character generator
   3.2 Camera control unit
   3.3 Video tape recorder
   3.4 Video cassette recorder
   3.5 Video monitor
   3.6 Mixing console
   3.7 Sync pulse generator
   3.8 Lighting system of studio
   3.9 Audio pick-up
   3.10 System blanking


6. Satellite Communication
   6.1 Direct broadcasting satellite

    6.1.1 Geostationary orbit

    6.1.2 Footprints

    6.1.3 Beam width

    6.2 Earth Station

    6.3 Uplinking and Downlinking

    6.4 Transponder

    7. Conclusion

1. What is TELEVISION?

The word television is derived from the Greek and means "to see at a distance". However, the up-to-date definition of television is more specific, and describes television as the electrical transmission of visual scenes and images by wire or radio, in such rapid succession as to produce the illusion of being able to witness the events as they occur at the transmitter end. The images can be reproduced in shades of light between black and white or in colour, and are accompanied by sound transmitted on an associated electrical sound channel.

    About Doordarshan

Doordarshan is the public television broadcaster of India and a division of Prasar Bharati, nominated by the Government of India. It is one of the largest broadcasting organisations in the world in terms of the infrastructure of studios and transmitters. Doordarshan Kendra is a milestone in the field of entertainment and education media. Here many cultures and ideas are combined to produce a programme; the whole process in a DDK is like blood circulation in the body.

History: Doordarshan had a modest beginning with an experimental telecast starting in Delhi in September 1959, with a small transmitter and a makeshift studio. Regular daily transmission started in 1965 as a part of All India Radio. Till 1975, seven Indian cities had television service and Doordarshan remained the only television channel in India.

DDK, JAIPUR: On 1st June 1987, Jaipur Doordarshan Kendra was set up at Jhalana Doongri and transmission started on 6th July 1987. From 2nd October 1993, the LPTs located at Ajmer, Udaipur, Jodhpur and Bikaner and the HPT at Bundi were connected with DDK Jaipur via satellite. The high power transmitter of DDK Jaipur is situated at Nahargarh.

Present Status:


Doordarshan Jaipur is the only programme production centre in Rajasthan. The studios are housed at Jhalana Doongri, Jaipur, and the transmitter is located at the Nahargarh Fort. As per the census figures of 2001, the channel covers 79% of Rajasthan by population and 72% by area. On 1/5/95 telecast of the DD-2 programme commenced from Jaipur. Now DD-2, converted as DD News, is being telecast from a 10 kW HPT set up in 2000. The reach of the News channel is 11% by area and 32% by population. There are 74.85% TV and 35.83% cable homes in urban Rajasthan and 25.69% TV, 7% cable homes in rural Rajasthan (NRS-2002).

Presently this Kendra originates over four hours of daily programming (25 hrs and 30 minutes weekly) in Hindi and Rajasthani. Programmes are also telecast in Sindhi, Urdu, English and Sanskrit. This Kendra originates two news bulletins daily, one in Hindi and one in Rajasthani, and feeds important stories for the national bulletins, including regular contributions to Rajyon Se Samachar at 1740 hrs daily on DD News. Major sports events are covered for national telecast in live/recorded mode. Programme contributions are also sent for national telecast.

    2. DOORDARSHAN STUDIO SET UP

    2.1 Television broadcasting system

    A television studio is an installation in which television or video productions take place,

either for live television, for recording live to tape, or for the acquisition of raw footage for

    postproduction. The design of a studio is similar to, and derived from, movie studios, with a


    few amendments for the special requirements of television production. A professional

television studio generally has several rooms, which are kept separate for noise and

practicality reasons. These rooms are connected via intercom, and personnel will be divided

among these workplaces. Generally, a television studio consists of the following rooms:

1. Studio floor
2. Production control room
3. Master control room

    2.2 Studio floor

The studio floor is the actual stage on which the actions that will be recorded take place. A

studio floor has the following characteristics and installations:

- decoration and/or sets
- cameras on pedestals
- microphones
- lighting rigs and the associated controlling equipment
- several video monitors for visual feedback from the production control room
- a small public address system for communication
- a glass window between the PCR and the studio floor for direct visual contact, which is usually desired but not always possible

While a production is in progress, the following people work on the studio floor:

- The on-screen "talent" themselves, and any guests - the subjects of the show.
- A floor director, who has overall charge of the studio area, and who relays timing and other information from the director.
- One or more camera operators who operate the television cameras.

2.3 Production Control Room


The production control room (also known as the 'gallery') is the place in a television studio in

which the composition of the outgoing program takes place. Facilities in a PCR include:

- a video monitor wall, with monitors for program, preview, videotape machines, cameras, graphics and other video sources
- a switcher, a device where all video sources are controlled and taken to air; also known as a special effects generator
- an audio mixing console and other audio equipment such as effects devices
- a character generator, which creates the majority of the names and full-screen graphics that are inserted into the program
- digital video effects and/or still frame devices (if not integrated in the vision mixer)
- the technical director's station, with waveform monitors, vectorscopes and the camera control units or remote control panels for the camera control units (CCUs)
- VTRs may also be located in the PCR, but are also often found in the central machine room.

    2.4 Master Control Room

The master control room houses equipment that is too noisy or runs too hot for the production

control room. It also keeps wire lengths and installation requirements within manageable

limits, since most high-quality wiring runs only between devices in this room.

This can include:

- the actual circuitry and connection boxes of the vision mixer, DVE and character generator devices
- camera control units
- VTRs


- patch panels for reconfiguration of the wiring between the various pieces of equipment.

    2.5 Other facilities

A television studio usually has other rooms with no technical requirements beyond program

and audio monitors. Among them are:

- one or more make-up and changing rooms
- a reception area for crew, talent, and visitors, commonly called the green room

2.6 Video Chain

We all know that the video we see at our homes is either pre-recorded in a studio or telecast live. But we do not know the path of this video signal from the studio to our home, or from the cricket ground to our home. Here the simple chain of video from studio to home is explained in brief.

[Signal-flow diagram: Studio 1 (video camera) → C.C.U. (monitor, waveform analyser) → P.C.R. 1 (vision mixer) → V.T.R.; Studio 2 (video camera) → P.C.R. 2 (vision mixer, character generator) → M.S.R.; onward to the transmitter / earth station.]

In the first chain we will understand studio programme recording. Camera output from the studio hall is sent to the CCU, the camera control unit; many parameters of the video signal are controlled from the CCU. The output signal of the CCU, after all the corrections are made, is sent to the VM (VISION MIXER) in PCR 1 (production control room). The output of 3 to 4 cameras comes here, and the final signal that we see at home is selected here using the VM according to the director's choice. The VM is a computer-based system by PINNACLE used to add transitions and many


other effects, like chroma keying, between two selected camera outputs. The final signal from the VM is sent to the VTR (video tape recorder). The VTR uses both analog and digital tape recording systems.
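The chroma keying mentioned above replaces every pixel close to a chosen key colour (typically green) with the corresponding pixel of a second source. A minimal per-pixel sketch in Python; the flat pixel-list representation, key colour and threshold are illustrative assumptions, not a description of the DDK equipment:

```python
def chroma_key(fg, bg, key_rgb=(0, 255, 0), threshold=120):
    """Replace foreground pixels close to the key colour with the background.

    fg, bg: lists of (r, g, b) tuples of equal length, one per pixel.
    threshold: maximum colour distance from the key colour that still
    counts as "key" (i.e. gets replaced by the background pixel).
    """
    out = []
    for f, b in zip(fg, bg):
        # Euclidean distance of this pixel from the key colour
        dist = sum((c - k) ** 2 for c, k in zip(f, key_rgb)) ** 0.5
        out.append(b if dist < threshold else f)
    return out

# A presenter pixel survives; a green-screen pixel is replaced.
frame = chroma_key([(200, 30, 40), (10, 250, 5)],
                   [(0, 0, 99), (0, 0, 99)])
# frame == [(200, 30, 40), (0, 0, 99)]
```

A real mixer does this in hardware at full frame rate, usually with a softness band around the threshold instead of a hard cut.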

    Audio Chain

As we understood the video chain, the audio chain is also interesting to know, and easier than the video chain.

In a studio programme, audio from the studio microphones is directly fed to the AUDIO CONSOLE placed in PCR-1. The audio console offers a range of multitrack mixing systems with exceptional flexibility; it is used to mix audio from different sources and maintain its output level. From the audio console, the signal is recorded directly on tape, together with the video signal, in the VTR.


    2.7 Vision Mixer

    A vision mixer (also called video switcher, video mixer or production switcher) is a

    device used to select between several different video sources and in some cases composite

(mix) video sources together and add special effects. This is similar to what a mixing console

does for audio.

    Explanation

Typically a vision mixer would be found in a professional television production environment

    such as a television studio, cable broadcast facility, commercial production facility or linear

video editing bay. The term can also refer to the person operating the device.

Vision mixer and video mixer are almost exclusively European terms to describe both the

equipment and operators. Software vision mixers are also available.

    Capabilities and usage in TV Productions

    Besides hard cuts (switching directly between two input signals), mixers can also generate a

    variety of transitions, from simple dissolves to pattern wipes. Additionally, most vision

    mixers can perform keying operations and generate color signals (called mattes in this

    context). Most vision mixers are targeted at the professional market, with newer analog

models having component video connections and digital ones using SDI. They are used in live and videotaped television productions and for linear video editing, even though the use

    of vision mixers in video editing has been largely supplanted by computer based non-linear

    editing.
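A dissolve, the simplest of the transitions listed above, is just a per-pixel weighted average of the two input sources. A minimal sketch; representing frames as flat lists of 0-255 luminance values is an assumption for the example:

```python
def dissolve(frame_a, frame_b, alpha):
    """Mix two video frames: alpha=0.0 gives frame_a, alpha=1.0 gives frame_b.

    Frames are lists of pixel luminance values (0-255); a real mixer does
    this per colour component, for every pixel, at full frame rate.
    """
    return [round((1 - alpha) * a + alpha * b)
            for a, b in zip(frame_a, frame_b)]

# Halfway through the transition, each pixel is the average of the sources.
mid = dissolve([0, 100, 200], [255, 100, 0], 0.5)  # [128, 100, 100]
```

Sweeping alpha from 0 to 1 over a second or two produces the on-air dissolve; a pattern wipe differs only in that alpha varies per pixel according to the wipe shape.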


    3.1 Character generator

    A character generator (CG for short) is a device or software that produces static or

    animated text (such as crawls and rolls) for keying into a video stream. Modern character

generators are actually computer-based, and can generate graphics as well as text.

Character generators are primarily used in the broadcast areas of live sports or news presentations, given that the modern character generator can rapidly (i.e., "on the fly") generate high-resolution, animated graphics for use when an unforeseen situation in the game or newscast dictates an opportunity for broadcast coverage. For example, when, in a football game, a previously unknown player begins to have what looks to become an outstanding day, the character generator operator can rapidly, using the "shell" of a similarly designed graphic

    composed for another player, build a new graphic for the previously unanticipated

    performance of the lesser known player. The character generator, then, is but one of many

    technologies used in the remarkably diverse and challenging work of live television, where

    events on the field or in the newsroom dictate the direction of the coverage. In such an

environment, the quality of the broadcast is only as good as its weakest link, both in terms of

personnel and technology. Hence, character generator development never ends, and the

distinction between hardware and software CGs begins to blur as new platforms and

operating systems evolve to meet the demands of live television.

    Hardware CGs

    Hardware CGs are used in television studios and video editing suites. A DTP-like interface

    can be used to generate static and moving text or graphics, which the device then encodes

    into some high-quality video signal, like digital SDI or analog component video, high

    definition or even RGB video. In addition, they also provide a key signal, which the


compositing vision mixer can use as an alpha channel to determine which areas of the CG video

are translucent.

    Software CGs

    Software CGs run on standard off-the-shelf hardware and are often integrated into video

editing software such as nonlinear video editing applications. Some stand-alone products are

available, however, for applications that do not even attempt to offer text generation on their

    own, as high-end video editing software often does, or whose internal CG effects are not

    flexible and powerful enough. Some software CGs can be used in live production with

    special software and computer video interface cards. In that case, they are equivalent to

    hardware CGs.

    3.2 Camera control unit

    The camera control unit (CCU) is installed in the production control room (PCR), and

allows various aspects of the video camera on the studio floor to be controlled remotely. The

    most commonly made adjustments are for white balance and aperture, although almost all

    technical adjustments are made from controls on the CCU rather than on the camera. This

    frees the camera operator to concentrate on composition and focus, and also allows the

technical director of the studio to ensure uniformity between all the cameras.

As well as acting as a remote control, the CCU usually provides the external interfaces for the

camera to other studio equipment, such as the vision mixer and intercom system, and contains

the camera's power supply.


    3.3 Video Tape Recorder

A video tape recorder (VTR) is a tape recorder that can record video material. The video

cassette recorder (VCR), where the videotape is enclosed in a user-friendly videocassette

shell, is the most familiar type of VTR known to consumers. Professionals may use other types of video tapes and recorders.

    Professional cassette / cartridge based systems

- U-matic (3/4")
- Betacam (Sony)
- M-II (Panasonic)
- Betacam SP (Sony)

    Standard definition Digital video tape formats

- D1 (Sony and Broadcast Television Systems Inc.)
- D2 (Sony and Ampex)
- Digital Betacam (Sony)
- Betacam IMX (Sony)
- DVCAM (Sony)
- DVCPRO (Panasonic)

    3.4 Video cassette recorder

[Figure: a VCR.]

    The videocassette recorder (or VCR, more commonly known in the British Isles as the

    video recorder), is a type of video tape recorder that uses removable videotape cassettes

    containing magnetic tape to record audio and video from a television broadcast so it can be



played back later. Many VCRs have their own tuner (for direct TV reception) and a

programmable timer (for unattended recording of a certain channel at a particular time).

    3.5 Video monitor

A video monitor is a device similar to a television, used to monitor the output of a video-

generating device, such as a video camera, VCR, or DVD player. It may or may not have

audio monitoring capability.

    Unlike a television, a video monitor has no tuner and, as such, is unable to independently

tune into an over-the-air broadcast.

One common use of video monitors is in television stations and outside broadcast vehicles,

    where broadcast engineers use them for confidence checking of signals throughout the

    system.

    Video monitors are also used extensively in the security industry with Closed-circuit

    television cameras and recording devices.

Common display types for video monitors:

- Cathode ray tube
- Liquid crystal display
- Plasma display

Common monitoring formats for broadcasters:

- Serial Digital Interface (SDI, as SD-SDI or HD-SDI)
- Composite video
- Component video


    3.6 Mixing Console

    In professional audio, a mixing console, digital mixing console, mixing desk (Brit.), or

    audio mixer, also called a sound board or soundboard, is an electronic device for

combining (also called "mixing"), routing, and changing the level, tone, and/or dynamics of audio signals. A mixer can mix analog or digital signals, depending on the type of mixer. The

    modified signals (voltages or digital samples) are summed to produce the combined output

    signals.

    Mixing consoles are used in many applications, including recording studios, public address

    systems, sound reinforcement systems, broadcasting, television, and film post-production. An

    example of a simple application would be to enable the signals that originated from two

    separate microphones (each being used by vocalists singing a duet, perhaps) to be heard

    through one set of speakers simultaneously. When used for live performances, the signal

    produced by the mixer will usually be sent directly to an amplifier, unless that particular

mixer is powered or it is being connected to powered speakers.

    Further channel controls affect the equalization of the signal by separately attenuating or

    boosting a range of frequencies (e.g., bass, midrange, and treble frequencies). Most large

    mixing consoles (24 channels and larger) usually have sweep equalization in one or more

    bands of its parametric equalizer on each channel, where the frequency and affected

bandwidth of equalization can be selected. Smaller mixing consoles have few or no

    equalization control. Some mixers have a general equalization control (either graphic or

    parametric).

    Each channel on a mixer has an audio taper pot, or potentiometer, controlled by a sliding

    volume control (fader), that allows adjustment of the level, or amplitude, of that channel in


the final mix. A typical mixing console has many rows of these sliding volume controls. Each

control adjusts only its respective channel (or one half of a stereo channel); therefore, it only

affects the level of the signal from one microphone or other audio device. The signals are

summed to create the main mix, or combined on a bus as a submix, a group of channels that

are then added to get the final mix (for instance, many drum mics could be grouped into a bus, and then the proportion of drums in the final mix can be controlled with one bus fader).

There may also be insert points for a certain bus, or even the entire mix.

On the right hand of the console, there are typically one or two master controls that enable

adjustment of the console's main mix output level.

    Finally, there are usually one or more VU or peak meters to indicate the levels for each

    channel, or for the master outputs, and to indicate whether the console levels are over

    modulating or clipping the signal. Most mixers have at least one additional output, besides

the main mix. These are either individual bus outputs, or auxiliary outputs, used, for instance,

to output a different mix to on-stage monitors. The operator can vary the mix (or levels of

each channel) for each output.

    As audio is heard in a logarithmic fashion (both amplitude and frequency), mixing console

    controls and displays are almost always in decibels, a logarithmic measurement system. This

is also why special audio taper pots or circuits are needed. Since the decibel is a relative measurement, and not a unit itself (like a percentage), the meters must be referenced to a nominal level. The

"professional" nominal level is considered to be +4 dBu. The "consumer grade" level is −10

dBV.
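Because console levels are logarithmic, a fader position in dB must be converted to a linear voltage gain before the channels are summed. A small sketch of both conversions (the function names and the simple per-sample summing are illustrative assumptions):

```python
import math


def db_to_gain(db):
    """Convert a fader/level value in decibels to a linear voltage gain."""
    return 10 ** (db / 20)


def mix(channels):
    """Sum channels after applying each fader gain (given in dB).

    channels: list of (sample_value, fader_db) pairs for one sample instant.
    """
    return sum(v * db_to_gain(db) for v, db in channels)


unity = db_to_gain(0)        # 1.0: a fader at 0 dB passes the signal as-is
half = db_to_gain(-6.02)     # ~0.5: -6 dB roughly halves the voltage
# Nominal levels: +4 dBu is 0.775 V * 10**(4/20), about 1.23 V RMS,
# while -10 dBV is 10**(-10/20), about 0.316 V RMS.
pro_level = 0.775 * db_to_gain(4)
```

The 20 (rather than 10) in the exponent is because dB for voltage is defined as 20·log10(V/Vref); power would use 10.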

    3.7 Sync Pulse Generator

A sync pulse generator, or sync signal generator as it is often called, comprises: (i) a crystal-

controlled or mains-locked timing system, (ii) pulse shapers that generate the required trains for

blanking, synchronisation and deflection drives, and (iii) amplifier distributors that supply these pulses to various studio sources in a studio complex.

    The timing unit in the sync pulse generator has a master oscillator at a frequency of about

    2H, that can be synchronised by: (1) a crystal oscillator, at 2H (31,250 Hz) exactly, (2) an

    external 2H frequency source or (3) the ac mains frequency with the help of a phase detector

    and an AFC circuit that compares 50 Hz vertical frequency rate with the mains frequency.


The required pulse timings at H and V rate are derived from the 2H master oscillator through

frequency dividers as shown in the figure. The blanking and sync pulses are derived from the

2H, H and V pulses employing suitable pulse shapers and pulse adders or logic gates.
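The divider chain above can be checked numerically. The point of running the master oscillator at 2H in a 625-line, 50 Hz system is that both the line and field rates then come from integer divisions (a small illustrative sketch, not the actual counter hardware):

```python
# Deriving the H and V pulse rates from the 2H master oscillator
# (625-line / 50 Hz system, as described in the text).
MASTER_2H = 31_250             # Hz, the 2H master oscillator

line_rate = MASTER_2H // 2     # H: 15 625 Hz (divide-by-2 counter)
field_rate = MASTER_2H // 625  # V: 50 Hz (divide-by-625 counter chain)

# Each field contains 312.5 lines, so H itself cannot be divided down
# to V with a simple integer counter -- hence the 2H master.
assert line_rate / field_rate == 312.5
```

This is also why mains locking compares the derived 50 Hz vertical rate, not H, against the mains frequency.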

    3.8 Lighting system of studio

    Basically the fittings employ incandescent lamps and quartz iodine lamps at appropriate

    color temperatures. Quartz iodine lamps are also incandescent lamps with quartz glass

envelope and an iodine vapour atmosphere inside. These lamps are more stable in operation and

color temperature with respect to aging. The lamp fittings generally comprise spot lights of 0.5 kW and 1 kW and broads of 1 kW, 2 kW and 5 kW. Quartz iodine lamps of 1 kW provide

    flood lights. A number of these fittings are suspended from the top so that they can be

adjusted unseen. The adjustments for raising and lowering can be done by (i) hand operation

for smaller suspensions, (ii) winch-motor-operated controls for greater mechanical loads of

batten suspensions carrying a number of light fittings, (iii) unipole suspensions carrying wells

of light fittings manipulated from a catwalk of steel structure at the top ceiling, where the

poles carrying these are clamped.

The lighting is controlled by varying the effective current flow through the lamps by means

of silicon controlled rectifier (SCR) dimmers. These enable the conduction angle of current flow to be

continuously varied by suitable gate-triggering signals. The lighting patch panels and SCR

dimmer controls for the lights are provided in a separate room. The lighting is energized and

controlled by switches and faders on the dimmer console in the PCR, from the technical


    presentation panel. The lighting has to prevent shadows and produce desired contrast effects.

    Following are some of the terms used in lighting.

High key: lighting that gives a picture with gradations falling between gray shades and white, confining dark gray and black to few areas, as in news reading, panel discussions, etc.

Low key: lighting that gives a picture with gradations falling from gray to black, with few areas of light gray or white.

Key light: the principal source of direct illumination, often with hinged panels or shutters to control the spread of the light beam.

Fill light: supplementary soft light that fills in details to reduce the shadow contrast range.

Back light: illumination from behind the subject, in the plane of the camera's optical axis, to provide 'accent lighting' that brings out the subject against the background or the scene.

    3.9 Audio Pick-up

    For sound arrangement, the microphone placement technique depends upon the type of

    program. In some cases, e.g. discussions, news and musical programs, the mikes may be

    visible to the viewers and these can be put on a desk or mounted on floor stands. In other

programs, for instance dramas, the mikes must be out of view. Such programs require hidden

    microphones or a boom-mounted mike with a boom operator. A unidirectional microphone

mounted on the boom arm, high enough to be out of sight, is desirable here. The boom operator must manipulate the boom properly. Lavaliere microphones and hidden

    microphones are also useful in such programs.

    In a television studio, there is considerable ambient noise resulting from off-the-camera

    activity, hence directional mikes are frequently used. The studio walls and ceilings are treated

    with sound absorbing material to make them as dead as possible. Artificial reverberation is

then required to achieve proper audio quality.

    3.10 System Blanking

When cameras are placed at different locations, they may require different cable lengths,

and hence the line drive pulses applied to the cameras may be unequally delayed by the

propagation delay of the cable, which is around 0.15 µs per 100 ft of cable. This can cause a

    time difference between the cameras proportional to the cable length differences, and the


    raster in the picture monitor will shift slightly as the cameras are switched over. System

    blanking is useful in overcoming this time difference between the two camera signals arriving

    at the vision mixer unit. The system blanking is much longer in duration and encompasses

both the camera blanking periods. The system line blanking is 12 µs whereas the camera line

blanking is only 7 µs. This avoids the shift in the raster from being observed.

In recent cameras, the time difference due to the differences in camera cable lengths is

offset by auto-phasing circuits which ensure that the video signals arriving from all cameras

    are all time-coincident irrespective of their cable lengths. Once the circuit is adjusted, the

    cable length has no effect on the timings. Even in such cases, the system blanking is

    necessary to mask off the unwanted oscillations or distortions at the end or start of the

    scanning line.
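The timing skew from unequal cable runs is easy to quantify with the 0.15 µs per 100 ft figure quoted above (the cable lengths in the example are hypothetical):

```python
DELAY_US_PER_100FT = 0.15   # propagation delay quoted for camera cable


def cable_skew_us(len_a_ft, len_b_ft):
    """Timing difference (microseconds) between two cameras caused by
    unequal camera cable lengths."""
    return abs(len_a_ft - len_b_ft) * DELAY_US_PER_100FT / 100


# Two cameras on 300 ft and 800 ft runs arrive 0.75 us apart -- well
# inside the 5 us margin between the 12 us system line blanking and the
# 7 us camera line blanking that hides the raster shift.
skew = cable_skew_us(300, 800)   # 0.75
```

This is the quantity that either system blanking masks or the auto-phasing circuits cancel.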

    3.11 Colour Sync Pulse Generators

Older monochrome video source equipment used four-line standard pulse distribution to the equipment,

viz. MS, LD, FD and MB pulses. A limited number of colour TV equipment used these four

sets of pulses plus the colour subcarrier CSC, the PAL ident flag and the colour burst gate.

The next generation colour equipment of solid state design used three-line distribution,

    viz. MS, MB and the CSC. Modern equipment employing LSI circuits use self-contained

    sync generators that require only a single reference pulse for operation. The colour-black

    signal with the black burst is taken as the de facto standard for single-line distribution. The

sync and the subcarrier must be carefully separated from video in order to maintain the exact

timing.

    3.12 Professional video camera

A professional video camera (often called a television camera, even though its use has

spread beyond television) is a high-end device for recording electronic moving images (as opposed to a movie

camera, which records the images on film). Originally developed for use in television studios,

they are now commonly used for corporate and educational videos, music videos, direct-to-

video movies, etc.

    3.13 Studio Cameras


It is common for professional cameras to split the incoming light into the three primary colors

that humans are able to see, feeding each color into a separate pickup tube (in older cameras)

or charge-coupled device (CCD). Some high-end consumer cameras also do this, producing a

higher-quality image than is normally possible with just a single video pickup.

    3.14 ENG Cameras

Often used in independent films, ENG video cameras are similar to consumer camcorders,

    and indeed the dividing line between them is somewhat blurry, but a few differences are

    generally notable:

- They are bigger, and usually have a shoulder stock for stabilizing on the cameraman's shoulder.
- They use 3 CCDs instead of one (as is common in digital still cameras and consumer equipment), one for each primary color.
- They have removable/swappable lenses.
- All settings like white balance, focus, and iris can be manually adjusted, and automatics can be completely disabled.
- Where possible, these functions are even adjustable mechanically (especially focus and iris), not by passing signals to an actuator or digitally dampening the video signal.
- They have professional connectors - BNC for video and XLR for audio.
- A complete timecode section is available, and multiple cameras can be timecode-synchronized with a cable.
- "Bars and tone" are available in-camera (the bars are SMPTE (Society of Motion Picture and Television Engineers) bars, similar to those seen on television when a station goes off the air; the tone is a test audio tone).

3.15 Parts of a Camera

Lens Turret: a judicious choice of lens can considerably improve the quality of the image, the depth of field and the impact which is intended to be created on the viewer. Accordingly, a

number of different viewing angles are provided. Their focal lengths are slightly adjusted by

movement of the front element of the lens located on the lens assembly.


Zoom Lens: a zoom lens has a variable focal length, with a range of 10:1 or more. With this lens

the viewing angle and field of view can be varied without loss of focus. Smooth and gradual

change of focal length by the cameraman while televising a scene enables dramatic close-up

control: to viewers it appears as if the camera is approaching or receding from the scene.

Camera Mounting: a studio camera needs to be able to move up and down and

around its centre axis to pick up different sections of the scene.

View Finder: to permit the camera operator to frame the scene and maintain proper focus,

an electronic viewfinder is provided with most TV cameras. It receives video signals from the

control room stabilizing amplifier. The viewfinder has its own deflection circuitry, as in any

other monitor, to produce the raster. The viewfinder also has a built-in DC restorer for

maintaining the average brightness of the scene televised.

    3.16 PAL Encoder

    The gamma-corrected RGB signals are combined in the Y-matrix to form the Y signal. The U-V matrix combines the R, B and Y signals to obtain R-Y and B-Y, which are weighted to obtain the U and V signals. Weighting by the factor 0.877 for R-Y and 0.493 for B-Y prevents overmodulation on saturated colours. This gives:

    Y = 0.30R + 0.59G + 0.11B

    U = 0.493(B-Y)

    V = 0.877(R-Y)
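    As a quick numerical check of the matrixing above, the sketch below (Python, using the standard PAL weighting factors 0.493 for B-Y and 0.877 for R-Y, with an arbitrary sample pixel) computes Y, U and V for a normalized RGB value:

```python
# Sketch of PAL Y/U/V matrixing. RGB values are normalized to 0..1.

def rgb_to_yuv(r, g, b):
    y = 0.30 * r + 0.59 * g + 0.11 * b   # luminance (Y-matrix)
    u = 0.493 * (b - y)                  # weighted B-Y
    v = 0.877 * (r - y)                  # weighted R-Y
    return y, u, v

if __name__ == "__main__":
    # Fully saturated red as an example pixel
    y, u, v = rgb_to_yuv(1.0, 0.0, 0.0)
    print(f"Y={y:.3f} U={u:.3f} V={v:.3f}")
```

    Note that a white pixel (R = G = B = 1) gives Y = 1 with U = V = 0, i.e. no chrominance, as expected for a monochrome-compatible signal.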


    3.17 Outside Broadcasting

    Outside Broadcasting is the production of television programmes (typically to cover news and sports events) from a mobile television studio. This mobile control room is known as an "Outside Broadcasting Van", "OB Van" or "Scanner". Signals from cameras and microphones come into the OB Van for processing and transmission. The term "OB" is almost unheard of in the United States, where "mobile unit" and "production truck" are more often used.

    A typical OB Van is usually divided into five parts.

    - The first and largest part is the production area, where the director, technical director, assistant director, character generator operator and producers usually sit in front of a wall of monitors. This area is very similar to a production control room. The technical director sits in front of the video switcher. The monitors show all the video feeds from various sources, including computer graphics, cameras, video tapes and slow-motion replay machines. The wall of monitors also contains a preview monitor, showing what could be the next source on air (it does not have to be, depending on how the video switcher is set up), and a program monitor that shows the feed currently going to air or being recorded.

    - The second part of the van is for the audio engineer; it has a sound mixer fed with all the various audio feeds: reporters, commentary, on-pitch microphones, etc. The audio engineer can control which channels are added to the output and will follow instructions from the director. The audio engineer normally also has a dirty feed monitor to help with the synchronization of sound and video.

    - The third part of the van is video tape. The tape area has a collection of video tape recorders (VTRs) and may also house additional power supplies or computer equipment.

    - The fourth part is the video control area, where the cameras are controlled by one or two people to make sure that the iris is at the correct exposure and that all the cameras look the same.


    - The fifth part is transmission, where the signal is monitored and engineered for quality control purposes and is transmitted or sent to other trucks.

    3.18 Video Switcher

    A video switcher is a multi-contact crossbar switch matrix with provision for selecting any one or more out of a large number of inputs and switching them on to outgoing circuits. The input sources include camera, VTR and telecine machine outputs, besides test signal and special effects generators.

    3.19 Video Editing

    The term video editing can refer to:

    - non-linear editing systems, using computers with video editing software

    - linear video editing, using videotape

    Video editing is the process of re-arranging or modifying segments of video to form another piece of video. The goals of video editing are the same as in film editing: the removal of unwanted footage, the isolation of desired footage, and the arrangement of footage in time to synthesize a new piece of footage.

    Clips are arranged on a timeline, music tracks and titles are added, effects can be created, and the finished program is "rendered" into a finished video.

    Non Linear Editing

    The term "non-linear editing" is also called "real-time" editing, "random-access" or "RA" editing, "virtual" editing, "electronic film" editing, and so on.


    Non-linear editing for film and television post-production is a modern editing method which involves being able to access any frame in a video clip with the same ease as any other. This method is similar in concept to the "cut and glue" technique used in film editing from the beginning. However, when working with film, it is a destructive process, as the actual film negative must be cut. Non-linear, non-destructive methods began to appear with the introduction of digital video technology.

    Video and audio data are first digitized to hard disks or other digital storage devices. The data is either recorded directly to the storage device or is imported from another source. Once imported, the material can be edited on a computer using any of a wide range of software. With the availability of commodity video processing hardware, specialist video editing cards and computers designed specifically for non-linear video editing, many software packages are now available to work with them.

    Some popular software packages used for NLE are:

    1. Adobe Premiere Elements (Microsoft Windows)

    2. Final Cut Express

    3. Leitch Velocity

    4. Media 100

    5. Nero 7 Premium

    6. Windows Movie Maker

    Linear Editing

    It is done using a VCR, with a monitor to see the output of the editing.

    3.20 Graphics

    The paint-box is a professional tool for the graphics designer. Using an electronic cursor or pen and an electronic board, any type of design can be created with the paint box. An artist can capture any live video frame, retouch it, subsequently process, cut or paste it on another picture, and prepare a stencil out of the grabbed picture.

    The system consists of: mainframe electronics, a graphics tablet, a keyboard, a floppy disk drive and a 385 MB Winchester disk drive.

    3.21 Electronics News Gathering

    It basically comes under outside broadcasting. ENG may be live or recorded. There are two types of professional video cameras: high-end portable recording cameras (essentially, high-end camcorders) used for ENG, and studio cameras, which lack the recording capability of a camcorder and are often fixed on studio pedestals. Portable professional cameras are generally much larger than consumer cameras and are designed to be carried on the shoulder.

    4. MICROPHONES

    A microphone (colloquially called a mic or mike) is an acoustic-to-electric transducer or sensor that converts sound into an electrical signal. In 1876, Emile Berliner invented the first microphone, used as a telephone voice transmitter. Microphones are used in many applications such as telephones, tape recorders, karaoke systems, hearing aids, motion picture production, live and recorded audio engineering, FRS radios, megaphones, radio and television broadcasting, and in computers for recording voice, speech recognition, VoIP, and for non-acoustic purposes such as ultrasonic checking or knock sensors.

    Most microphones today use electromagnetic induction (dynamic microphones), capacitance change (condenser microphones), piezoelectric generation, or light modulation to produce an electrical voltage signal from mechanical vibration.

    The sensitive transducer element of a microphone is called its element or capsule. A complete microphone also includes a housing, some means of bringing the signal from the element to other equipment, and often an electronic circuit to adapt the output of the capsule to the equipment being driven. Microphones are referred to by their transducer principle, such as condenser, dynamic, etc., and by their directional characteristics. Sometimes other characteristics, such as diaphragm size, intended use, or orientation of the principal sound input to the principal axis (end- or side-address), are used to describe the microphone.

    4.1 Condenser microphone

    Inside the Oktava 319 condenser microphone

    The condenser microphone, invented at Bell Labs in 1916 by E. C. Wente [2], is also called a capacitor microphone or electrostatic microphone. Here, the diaphragm acts as one plate of a capacitor, and the vibrations produce changes in the distance between the plates. There are two types, depending on the method of extracting the audio signal from the transducer: DC-biased and radio frequency (RF) or high frequency (HF) condenser microphones. With a DC-biased microphone, the plates are biased with a fixed charge (Q). The voltage maintained across the capacitor plates changes with the vibrations in the air, according to the capacitance equation (C = Q/V), where Q = charge in coulombs, C = capacitance in farads and V = potential difference in volts. The capacitance of the plates is inversely proportional to the distance between them for a parallel-plate capacitor. The assembly of fixed and movable plates is called an "element" or "capsule".

    A nearly constant charge is maintained on the capacitor. As the capacitance changes, the charge across the capacitor does change very slightly, but at audible frequencies it is sensibly constant. The capacitance of the capsule (around 5 to 100 pF) and the value of the bias resistor (100 megohms to tens of gigohms) form a filter that is high-pass for the audio signal and low-pass for the bias voltage. Note that the time constant of an RC circuit equals the product of the resistance and capacitance.

    Within the time-frame of the capacitance change (as much as 50 ms at a 20 Hz audio signal), the charge is practically constant and the voltage across the capacitor changes instantaneously to reflect the change in capacitance. The voltage across the capacitor varies above and below the bias voltage. The voltage difference between the bias and the capacitor is seen across the series resistor. The voltage across the resistor is amplified for performance or recording.

    AKG C451B small-diaphragm condenser microphone
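    The constant-charge behaviour described above can be illustrated numerically. The sketch below (Python) holds the charge Q fixed and shows how a change in plate spacing changes the capacitance and hence the voltage across the capsule; the diaphragm area, spacing and 48 V bias are illustrative assumptions, not data from any real microphone:

```python
# Constant-charge condenser capsule: V = Q / C, with C inversely
# proportional to plate spacing (parallel-plate approximation).
EPS0 = 8.854e-12          # permittivity of free space, F/m

def capsule_voltage(q, area, spacing):
    c = EPS0 * area / spacing      # parallel-plate capacitance, farads
    return q / c                   # voltage across the plates

if __name__ == "__main__":
    area = 1e-4                    # 1 cm^2 diaphragm (illustrative)
    d0 = 20e-6                     # 20 um rest spacing (illustrative)
    c0 = EPS0 * area / d0          # rest capacitance, about 44 pF
    q = 48.0 * c0                  # charge from a 48 V polarizing bias
    # A sound wave moves the diaphragm inward by 1% of the spacing:
    v_rest = capsule_voltage(q, area, d0)
    v_near = capsule_voltage(q, area, d0 * 0.99)
    print(f"C0 = {c0*1e12:.1f} pF, V rest = {v_rest:.2f} V, "
          f"V at -1% spacing = {v_near:.3f} V")
```

    With charge held constant, the voltage tracks the spacing directly (a 1% spacing change gives a 1% voltage change), which is the audio signal seen across the series resistor.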

    4.2 Dynamic microphone

    Dynamic microphones work via electromagnetic induction. They are robust, relatively inexpensive and resistant to moisture. This, coupled with their potentially high gain before feedback, makes them ideal for on-stage use.

    Moving-coil microphones use the same dynamic principle as in a loudspeaker, only reversed. A small movable induction coil, positioned in the magnetic field of a permanent magnet, is attached to the diaphragm. When sound enters through the windscreen of the microphone, the sound wave moves the diaphragm. When the diaphragm vibrates, the coil moves in the magnetic field, producing a varying current in the coil through electromagnetic induction. A single dynamic membrane does not respond linearly to all audio frequencies. Some microphones for this reason utilize multiple membranes for the different parts of the audio spectrum and then combine the resulting signals. Combining the multiple signals correctly is difficult, and designs that do this are rare and tend to be expensive. There are, on the other hand, several designs that are more specifically aimed at isolated parts of the audio spectrum. The AKG D 112, for example, is designed for bass response rather than treble. In audio engineering, several kinds of microphones are often used at the same time to get the best result.

    4.3 Ribbon Microphone

    Ribbon microphones use a thin, usually corrugated metal ribbon suspended in a magnetic field. The ribbon is electrically connected to the microphone's output, and its vibration within the magnetic field generates the electrical signal. Ribbon microphones are similar to moving-coil microphones in the sense that both produce sound by means of magnetic induction. Basic ribbon microphones detect sound in a bi-directional (also called figure-eight) pattern because the ribbon, which is open to sound both front and back, responds to the pressure gradient rather than the sound pressure. Though the symmetrical front and rear pickup can be a nuisance in normal stereo recording, the high side rejection can be used to advantage by positioning a ribbon microphone horizontally, for example above cymbals, so that the rear lobe picks up only sound from the cymbals. Crossed figure-eight, or Blumlein pair, stereo recording is gaining in popularity, and the figure-eight response of a ribbon microphone is ideal for that application.

    4.4 Carbon microphone

    A carbon microphone, also known as a carbon button microphone (or sometimes just a button microphone), uses a capsule or button containing carbon granules pressed between two metal plates, like the Berliner and Edison microphones. A voltage is applied across the metal plates, causing a small current to flow through the carbon. One of the plates, the diaphragm, vibrates in sympathy with incident sound waves, applying a varying pressure to the carbon. The changing pressure deforms the granules, causing the contact area between each pair of adjacent granules to change, and this causes the electrical resistance of the mass of granules to change. The changes in resistance cause a corresponding change in the current flowing through the microphone, producing the electrical signal. Carbon microphones were once commonly used in telephones; they have extremely low-quality sound reproduction and a very limited frequency response range, but are very robust devices. The Boudet microphone, which used relatively large carbon balls, was similar to the granule carbon button microphones.

    Unlike other microphone types, the carbon microphone can also be used as a type of amplifier, using a small amount of sound energy to control a larger amount of electrical energy. Carbon microphones found use as early telephone repeaters, making long-distance phone calls possible in the era before vacuum tubes. These repeaters worked by mechanically coupling a magnetic telephone receiver to a carbon microphone: the faint signal from the receiver was transferred to the microphone, with a resulting stronger electrical signal to send down the line. One illustration of this amplifier effect was the oscillation caused by feedback, resulting in an audible squeal from the old "candlestick" telephone if its earphone was placed near the carbon microphone.

    4.5 Piezoelectric microphone

    A crystal microphone or piezo microphone uses the phenomenon of piezoelectricity (the ability of some materials to produce a voltage when subjected to pressure) to convert vibrations into an electrical signal. An example of this is potassium sodium tartrate, which is a piezoelectric crystal that works as a transducer, both as a microphone and as a slimline loudspeaker component. Crystal microphones were once commonly supplied with vacuum tube (valve) equipment, such as domestic tape recorders. Their high output impedance matched the high input impedance (typically about 10 megohms) of the vacuum tube input stage well. They were difficult to match to early transistor equipment and were quickly supplanted by dynamic microphones for a time, and later by small electret condenser devices. The high impedance of the crystal microphone made it very susceptible to handling noise, both from the microphone itself and from the connecting cable.

    Piezoelectric transducers are often used as contact microphones to amplify sound from acoustic musical instruments, to sense drum hits, for triggering electronic samples, and to record sound in challenging environments, such as underwater under high pressure. Saddle-mounted pickups on acoustic guitars are generally piezoelectric devices that contact the strings passing over the saddle. This type of microphone is different from the magnetic coil pickups commonly visible on typical electric guitars, which use magnetic induction, rather than mechanical coupling, to pick up vibration.

    4.6 Fiber optic microphone

    The Optoacoustics 1140 fiber optic microphone


    A fiber optic microphone converts acoustic waves into electrical signals by sensing changes in light intensity, instead of sensing changes in capacitance or magnetic fields as with conventional microphones.

    During operation, light from a laser source travels through an optical fiber to illuminate the surface of a tiny, sound-sensitive reflective diaphragm. Sound causes the diaphragm to vibrate, thereby minutely changing the intensity of the light it reflects. The modulated light is then transmitted over a second optical fiber to a photodetector, which transforms the intensity-modulated light into analog or digital audio for transmission or recording. Fiber optic microphones possess high dynamic and frequency range, similar to the best high-fidelity conventional microphones.

    Fiber optic microphones do not react to or influence any electrical, magnetic, electrostatic or radioactive fields (this is called EMI/RFI immunity). The fiber optic microphone design is therefore ideal for use in areas where conventional microphones are ineffective or dangerous, such as inside industrial turbines or in magnetic resonance imaging (MRI) equipment environments.

    Fiber optic microphones are robust, resistant to environmental changes in heat and moisture, and can be produced for any directionality or impedance matching. The distance between the microphone's light source and its photodetector may be up to several kilometers without need for any preamplifier or other electrical device, making fiber optic microphones suitable for industrial and surveillance acoustic monitoring.

    Fiber optic microphones are used in very specific application areas such as infrasound monitoring and noise-canceling. They have proven especially useful in medical applications, such as allowing radiologists, staff and patients within the powerful and noisy magnetic field to converse normally, inside MRI suites as well as in remote control rooms [10]. Other uses include industrial equipment monitoring and sensing, audio calibration and measurement, high-fidelity recording and law enforcement.

    4.7 Laser microphone

    Laser microphones are often portrayed in movies as spy gadgets. A laser beam is aimed at the surface of a window or other plane surface that is affected by sound. The slight vibrations of this surface displace the returned beam, causing it to trace the sound wave. The vibrating laser spot is then converted back to sound. In a more robust and expensive implementation, the returned light is split and fed to an interferometer, which detects movement of the surface. The former implementation is a tabletop experiment; the latter requires an extremely stable laser and precise optics.

    A new type of laser microphone is a device that uses a laser beam and smoke or vapor to detect sound vibrations in free air. Sound pressure waves cause disturbances in the smoke that in turn cause variations in the amount of laser light reaching the photodetector.

    4.8 Speakers as microphones

    A loudspeaker, a transducer that turns an electrical signal into sound waves, is the functional opposite of a microphone. Since a conventional speaker is constructed much like a dynamic microphone (with a diaphragm, coil and magnet), speakers can actually work "in reverse" as microphones. The result, though, is a microphone with poor quality, limited frequency response (particularly at the high end), and poor sensitivity. In practical use, speakers are sometimes used as microphones in applications where high quality and sensitivity are not needed, such as intercoms, walkie-talkies or video game voice chat peripherals, or when conventional microphones are in short supply.

    Microphones, however, are not designed to handle the power that speaker components are routinely required to cope with. One instance of such an application was the STC microphone-derived 4001 super-tweeter, which was successfully used in a number of high-quality loudspeaker systems from the late 1960s to the mid-70s.

    5. Composite Video Signal and Television Standards

    5.1 Composite video signal


    The composite video signal is formed by the electrical signal corresponding to the picture information in the lines scanned in the TV camera pick-up tube and the synchronizing signals introduced in it. It is important to preserve its waveform, as any distortion of the video signal will affect the reproduced picture, while a distortion in the sync pulses will affect synchronization, resulting in an unstable picture. The signal is, therefore, monitored with the help of an oscilloscope at various stages in the transmission path to conform with the standards. In receivers, observation of the video signal waveform can provide valuable clues to circuit faults and malfunctions.

    Composite video is the format of an analog television (picture only) signal before it is combined with a sound signal and modulated onto an RF carrier.

    It is usually in a standard format such as NTSC, PAL, or SECAM. It is a composite of three source signals called Y, U and V (together referred to as YUV) with sync pulses. Y represents the brightness or luminance of the picture and includes synchronizing pulses, so that by itself it could be displayed as a monochrome picture. U and V between them carry the colour information. They are first mixed with two orthogonal phases of a colour carrier signal to form a signal called the chrominance. Y and UV are then added together. Since Y is a baseband signal and UV has been mixed with a carrier, this addition is equivalent to frequency-division multiplexing.
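    The quadrature mixing and addition just described can be sketched numerically. In the sketch below (Python), the chrominance is formed from two orthogonal phases of the colour subcarrier and added to the luminance; the constant Y, U, V levels are arbitrary illustrative values, not real broadcast levels:

```python
import math

# One line's worth of composite baseband, sketched in normalized units.
FSC = 4.43361875e6   # PAL colour subcarrier frequency, Hz

def composite_sample(y, u, v, t):
    # Quadrature (orthogonal-phase) modulation of U and V on the subcarrier,
    # then frequency-division multiplexed with the baseband Y signal.
    chroma = u * math.sin(2 * math.pi * FSC * t) + v * math.cos(2 * math.pi * FSC * t)
    return y + chroma

if __name__ == "__main__":
    y, u, v = 0.5, 0.1, 0.2
    # Sample at quarter-cycle points of the subcarrier:
    samples = [composite_sample(y, u, v, n / (4 * FSC)) for n in range(4)]
    print([round(s, 3) for s in samples])   # -> [0.7, 0.6, 0.3, 0.4]
```

    At the quarter-cycle sample points, the composite value alternately reads Y+V, Y+U, Y-V and Y-U, which is why a receiver locked to the subcarrier phase can separate the two colour-difference signals again.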

    5.2 Colorburst

    In composite video, colorburst is a signal used to keep the chrominance subcarrier synchronized in a color television signal. By synchronizing an oscillator with the colorburst at the beginning of each scan line, a television receiver is able to restore the suppressed carrier of the chrominance signals and, in turn, decode the color information.

    5.3 Television Broadcast Channels

    For television broadcasting, channels have been assigned in the VHF and UHF ranges. The allocated frequencies are listed below. (Band II, 88-108 MHz, is allotted for FM broadcasting.)

    The channel allocations in band I and band III are given in the table. There are four channels in band I, of which channel 1 (41-47 MHz) is no longer used for TV broadcasting, being assigned to other services.

    5.4 Broadcasting of TV Programs

    The public television service is operated by broadcasting picture and sound from picture transmitters and associated sound transmitters in three main frequency ranges in the VHF and UHF bands. By international ruling of the ITU, these ranges are exclusively allocated to television broadcasting. Subdivision into operating channels and their assignment by location are also ruled by international regional agreement. The continental standards are valid as per the CCIR 1961 Stockholm plan.

    Range              Band       Frequency
    Lower VHF range    Band I     41-68 MHz
    Upper VHF range    Band III   174-230 MHz
    UHF range          Band IV    470-582 MHz
    UHF range          Band V     606-790 MHz

    TELEVISION CHANNEL ALLOCATIONS

    Channel   Frequency range, MHz   Picture carrier, MHz   Sound carrier, MHz
    1         41-47                  Not used for TV
    2         47-54                  48.25                  53.75
    3         54-61                  55.25                  60.75
    4         61-68                  62.25                  67.75
    5         174-181                175.25                 180.75
    6         181-188                182.25                 187.75
    7         188-195                189.25                 194.75
    8         195-202                196.25                 201.75
    9         202-209                203.25                 208.75
    10        209-216                210.25                 215.75
    11        216-223                217.25                 222.75

    The details of the various system parameters are as follows.
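    The carrier columns of the channel allocation table follow a simple rule for these 7 MHz channels: the picture carrier sits 1.25 MHz above the lower channel edge, and the sound carrier 5.5 MHz above the picture carrier. A short sketch (Python) reproducing the table rows:

```python
# Reproduce the picture/sound carrier columns of the 7 MHz channel table:
# picture carrier = lower channel edge + 1.25 MHz,
# sound carrier   = picture carrier + 5.5 MHz.

def carriers(lower_edge_mhz):
    picture = lower_edge_mhz + 1.25
    sound = picture + 5.5
    return picture, sound

if __name__ == "__main__":
    band_i = [47, 54, 61]                        # channels 2-4
    band_iii = [174 + 7 * n for n in range(7)]   # channels 5-11
    for ch, edge in enumerate(band_i + band_iii, start=2):
        pic, snd = carriers(edge)
        print(f"channel {ch:2d}: picture {pic:.2f} MHz, sound {snd:.2f} MHz")
```

    For example, channel 2 (47-54 MHz) gives a 48.25 MHz picture carrier and a 53.75 MHz sound carrier, matching the table.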

    5.5 Types of Modulation

    Vision: C3F (vestigial sideband AM). The saving of frequency band is about 40%. The polarity is negative because of the susceptibility to interference of the synchronizing circuits of early TV receivers (exception: positive modulation); residual carrier with negative modulation 10% (exception: 20%).

    Sound: F3E; FM for better separation from the vision signal in the receiver (exception: AM). The sound carrier is above the vision carrier within the RF channel, with inversion at IF (exceptions: standards A, E and, in part, L).

    5.6 Vestigial Sideband Transmission

    Band        Frequency                  Channels        Bandwidth
    I           (41) 47 to 68 MHz          2 to 4          7 MHz
    II          87.5 (88) to 108 MHz       VHF FM sound
    III         174 to 223 (230) MHz       5 to 11 (12)    7 MHz
    IV          470 to 582 MHz             21 to 27        8 MHz
    V           582 to 790 (860) MHz       28 to 60 (69)   8 MHz
    VI          11.7 to 12.5 GHz           superseded by ...
    Special     68 to 82 (89) MHz          S2 (S3)         7 MHz
    channels    104 to 174 MHz and
    (Cable TV)  230 to 300 MHz             S1 to S20       7 MHz

    Vestigial sideband ratios:      Systems:
    0.75/4.2 MHz = 1:5.6            M 525/60, 6 MHz
    0.75/5.0 MHz = 1:6.7            B 625/50, 7 MHz
    1.25/5.5 MHz = 1:4.4            I 625/50, 8 MHz


    In the video signal, very low frequency modulating components exist along with the rest of the signal. These components give rise to sidebands very close to the carrier frequency, which are difficult to remove by physically realizable filters. Thus it is not possible to go to the extreme and fully suppress one complete sideband in the case of television signals. The low video frequencies contain the most important information of the picture, and any effort to completely suppress the lower sideband would result in objectionable phase distortion at these frequencies. This distortion would be seen by the eye as 'smear' in the reproduced picture. Therefore, as a compromise, only a part of the lower sideband is suppressed, and the radiated signal then consists of the full upper sideband together with the carrier, and the vestige (remaining part) of the partially suppressed lower sideband. This pattern of transmission of the modulated signal is known as vestigial sideband or A5C transmission. In the 625-line system, frequencies up to 0.75 MHz in the lower sideband are fully radiated. The net result is a normal double sideband transmission for the lower video frequencies corresponding to the main body of picture information.

    As stated earlier, because of filter design difficulties it is not possible to terminate the bandwidth of a signal abruptly at the edges of the sidebands. Therefore, an attenuation slope covering approximately 0.5 MHz is allowed at either end. Any distortion at the higher frequency end, if attenuation slopes were not allowed, would mean a serious loss in horizontal detail, since the high frequency components of the video modulation determine the amount of horizontal detail in the picture. The figure illustrates the saving of band space which results from vestigial sideband transmission. The picture signal is seen to occupy a bandwidth of 6.75 MHz instead of 11 MHz.
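    The bandwidth saving can be checked with a small sketch (Python), using the 625-line figures quoted in this section: a 5 MHz video band, a 0.75 MHz vestige, and 0.5 MHz attenuation slopes at each end:

```python
# Vestigial sideband (VSB) bandwidth saving for the 625-line system.

def dsb_bandwidth(video_mhz, slope_mhz):
    # Full double-sideband: both sidebands plus a slope at each end.
    return 2 * video_mhz + 2 * slope_mhz

def vsb_bandwidth(video_mhz, vestige_mhz, slope_mhz):
    # Full upper sideband + vestige of lower sideband + slopes at each end.
    return video_mhz + vestige_mhz + 2 * slope_mhz

if __name__ == "__main__":
    dsb = dsb_bandwidth(5.0, 0.5)            # 11.0 MHz
    vsb = vsb_bandwidth(5.0, 0.75, 0.5)      # 6.75 MHz
    print(f"DSB: {dsb} MHz, VSB: {vsb} MHz, saving: {100 * (1 - vsb / dsb):.0f}%")
```

    This reproduces the 6.75 MHz versus 11 MHz comparison above, and the roughly 40% band saving mentioned under Types of Modulation.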

    5.7 Digital Coding of Colour TV Video Signals and Sound Signals

    National and international organizations are attempting at present to establish a uniform digital coding standard, or at least an optimal compromise, for the TV studio and for transmission, on the basis of the CCIR 601 recommendation for digital interfaces.

    5.8 TV Studio

    The (West European) EBU has prepared the following digital coding standard for video signals:

    - Component coding (Y signal and two colour-difference signals);

    - Sampling frequencies in the ratio 4:2:2, with 13.5 MHz for the luminance component and 6.75 MHz for each chrominance component;

    - Quantization q of 8 bits/amplitude value;

    - Data flow per channel:

      13.5 x 10^6 values/s x 8 bits/amplitude value = 108 Mbit/s (luminance)
      6.75 x 10^6 values/s x 8 bits/amplitude value = 54 Mbit/s (each chrominance)
      108 Mbit/s + 2 x 54 Mbit/s = 216 Mbit/s

    i.e. the required bandwidth is approximately 100 MHz.
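    These data-flow figures can be reproduced with a short sketch (Python):

```python
# CCIR 601 / EBU 4:2:2 studio data rate.

def channel_rate_mbit(sample_hz, bits):
    return sample_hz * bits / 1e6

if __name__ == "__main__":
    y = channel_rate_mbit(13.5e6, 8)        # luminance: 108 Mbit/s
    c = channel_rate_mbit(6.75e6, 8)        # each chrominance: 54 Mbit/s
    total = y + 2 * c                       # 216 Mbit/s
    print(f"Y = {y:.0f} Mbit/s, each chroma = {c:.0f} Mbit/s, "
          f"total = {total:.0f} Mbit/s")
```

    The 4:2:2 ratio is visible directly: the luminance channel is sampled twice as fast as each chrominance channel, so it carries twice the bit rate.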

    5.9 Transmission

    This high channel capacity can only be achieved with internal studio links via coaxial cables or fibre optics. In the public communications networks of present-day technology, the limits per channel lie at the hierarchical step of 34 Mbit/s for microwave links, later 140 Mbit/s. Therefore great attempts are being made at reducing the bit rate, with the aim of achieving satisfactory picture quality at 34 Mbit/s per channel.

    Terrestrial TV transmitters and coaxial copper cable networks are unsuitable for digital transmissions. Satellites with carrier frequencies of about 20 GHz and above may be used.

    The digital coding of sound signals for satellite sound broadcasting and for the digital sound studio is more elaborate with respect to quantizing than for video signals. A quantization q of 16 bits/amplitude value is required to obtain a quantizing signal-to-noise ratio S/Nq of 98 dB [S/Nq = (96 + 2) dB]. The sampling frequency must follow the sampling theorem, f_sample >= 2 x f_max, where f_max is the maximum frequency of the base band.
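    The quantizing signal-to-noise ratio and the sound data rates quoted in this section can be checked with a sketch (Python), using the approximation implied by the bracketed formula above (about 6 dB per bit plus 2 dB):

```python
# Quantizing S/N and data rates for digitally coded sound signals.

def sn_quantizing_db(q_bits):
    # Approximation used in the text: about 6 dB per bit, plus 2 dB.
    return 6 * q_bits + 2

def data_rate_kbit(f_sample_hz, q_bits):
    return f_sample_hz * q_bits / 1e3

if __name__ == "__main__":
    print(f"S/Nq for 16 bits: {sn_quantizing_db(16)} dB")
    print(f"Satellite sound broadcasting (32 kHz): "
          f"{data_rate_kbit(32e3, 16):.0f} kbit/s")
    print(f"Digital sound studio (48 kHz): "
          f"{data_rate_kbit(48e3, 16):.0f} kbit/s")
```

    The 512 kbit/s and 768 kbit/s figures match the sound coding table given with the satellite section below.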

    6. SATELLITE COMMUNICATION

    Television could not exist in its contemporary form without satellites. Since 10 July 1962, when NASA technicians in Maine transmitted fuzzy images of themselves to engineers at a receiving station in England using the Telstar satellite, orbiting communications satellites have been routinely used to deliver television news and programming between companies and to broadcasters and cable operators. And since the mid-1980s they have been increasingly used to broadcast programming directly to viewers, to distribute advertising, and to provide live news coverage.

    6.1 Direct Broadcasting Satellites

                                  f_sample        Quantization q   Data flow/channel
    Satellite sound broadcasting  32 kHz          16 bits          512 kbit/s
    Digital sound studio          up to 48 kHz    16 bits          768 kbit/s


    6.1.1 Geostationary Orbit

    As indicated in Section 7.12, a satellite orbiting at a height of about 36,000 km from the earth, at an orbital speed of about 3 km/s (11,000 km/h), acts as a geostationary satellite when the centrifugal force acting on the satellite just balances the gravitational pull of the earth.

    If M is the mass of the earth, m the mass of the satellite, r the radius of the orbit, and G the gravitational constant, equating the centrifugal force to the gravitational force gives:

    mv^2/r = GMm/r^2

    v = sqrt(GM/r)

    T = orbital period of the satellite = 24 hrs = 2*pi*r/v = 24 x 3600 seconds

    Putting M = 5.974 x 10^24 kg and G = 6.6672 x 10^-11 N m^2/kg^2 gives the orbital radius of a synchronous satellite as 42,164 km. Deducting the radius of the earth, equal to 6378 km, the distance from the earth's surface will be 35,786 km.
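    The orbital radius quoted above can be reproduced with a short sketch (Python), using the G and M values from the text. One assumption is made explicit in the code: the precise figure of 42,164 km corresponds to the sidereal day of about 86,164 s rather than exactly 24 hours:

```python
import math

# Radius of a geosynchronous orbit from T = 2*pi*sqrt(r^3 / (G*M)),
# i.e. r = (G*M*T^2 / (4*pi^2))^(1/3).
G = 6.6672e-11      # gravitational constant (value used in the text)
M = 5.974e24        # mass of the earth, kg (value used in the text)
R_EARTH_KM = 6378   # equatorial radius of the earth, km

def geosync_radius_km(period_s):
    r_cubed = G * M * period_s**2 / (4 * math.pi**2)
    return r_cubed ** (1 / 3) / 1000

if __name__ == "__main__":
    # Sidereal day, ~86,164 s (assumption; the text rounds to 24 h).
    r = geosync_radius_km(86164)
    print(f"orbital radius ~ {r:.0f} km, "
          f"height above surface ~ {r - R_EARTH_KM:.0f} km")
```

    This gives an orbital radius of about 42,160 km and a height of about 35,780 km, consistent with the figures in the text.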

    6.1.2 Footprints

    As the satellite radio beam is aimed towards the earth, it illuminates an oval service area on the earth, called the 'footprint'. Because of the slant illumination of the earth by the equatorial satellite, this is actually an egg-shaped area with the sharper side pointing towards the pole. The size of the footprint depends on how greatly the beam spreads over the surface of the earth it intercepts. The footprints for contours of 3 dB, or half-power beam width, are usually considered. The beam-width planning depends on the angle of incidence of the beam on the earth, or the angle of elevation of the satellite, and it can be directly controlled by the size of the on-board parabolic antenna. Present-day launchers can carry antennas of around 3 m,


    giving a minimum beam width of about 0.6°. With the difficulties of accurate station-keeping, it is prudent to allow for a margin of around 0.1° when planning the footprint to cover a country. Some satellites employ additional antennas to emit spot beams that cover regions beyond the normal oval shape. The slant range of a satellite involves calculating the distances from the boresight point of the beam, covered by the semi-beam-width angle, considering the geometry of the footprint.

    6.1.3 Beam Width

    The radiation pattern from a parabolic dish can be calculated from the equation:

    E = J₁{(πD/λ) sin θ} · {2λ/(πD sin θ)}

    where J₁ = Bessel function of the first order,
    D = diameter of the parabolic dish,
    λ = wavelength,
    θ = angle of direction with respect to the principal axis of the antenna aperture.

    The expression (argument) within the braces is evaluated and the Bessel function obtained for that argument from Bessel-function tables or graphs. The values of the argument for which the Bessel function becomes zero are 3.83, 7.02, 10.17 and 13.32. The angle of the radiation pattern at which the first null occurs is therefore given by

    (πD/λ) sin θ = 3.83

    which gives:

    sin θ = 3.83 λ/(πD) = 1.22 λ/D, and hence

    θ = 1.22 (λ/D) radians, or
    θ = 70 (λ/D) degrees

    The main lobe of a circular dish lies within the angle between the first nulls on either side, or


    is given by twice this angle θ.

    The 3 dB beam width of the main lobe is given by the half-lobe angle

    θ₃dB = 58 (λ/D) degrees

    It may be observed that the antenna gain is inversely proportional to the square of the beam width. That is, halving the beam width by doubling the diameter of the dish increases the antenna gain by a factor of 4 (6 dB).
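The two rules of thumb above can be checked against the 3 m antenna and ~0.6° beam mentioned in Section 6.1.2. A minimal sketch, assuming a Ku-band downlink around 12 GHz (the frequency is an illustrative choice, not stated in the text):

```python
import math

# Sketch using the rule-of-thumb formulas above:
#   first-null half-angle ~ 70 * (lambda/D) degrees
#   3 dB beam width       ~ 58 * (lambda/D) degrees

C = 3.0e8  # speed of light (m/s)

def beamwidths_deg(diameter_m: float, freq_hz: float):
    lam = C / freq_hz                       # wavelength (m)
    first_null = 70 * lam / diameter_m      # angle to first null (degrees)
    bw_3db = 58 * lam / diameter_m          # 3 dB beam width (degrees)
    return first_null, bw_3db

# A 3 m dish at 12 GHz (lambda = 2.5 cm):
null_deg, bw3_deg = beamwidths_deg(3.0, 12e9)
print(f"first null at {null_deg:.2f} deg, 3 dB beam width {bw3_deg:.2f} deg")

# Doubling the diameter halves both angles; since gain ~ 1/(beam width)^2,
# this raises the antenna gain by a factor of 4, i.e. 6 dB, as noted above.
```

The ~0.58° first-null angle matches the "minimum beam width of about 0.6°" quoted for a 3 m antenna in Section 6.1.2.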

    6.2 Earth Station

    An earth station or ground station is the surface-based (terrestrial) end of a communications link to an object in outer space. The space end of the link is occasionally referred to as a space station (though that term often implies a human-inhabited complex). The majority of earth stations are used to communicate with communications satellites, and are called satellite earth stations or teleports, but others are used to communicate with space probes and manned spacecraft. Where the communications link is used mainly to carry


    telemetry, or must follow a satellite not in geostationary orbit, the earth station is often referred to as a tracking station.

    A satellite earth station is a communications facility with a microwave radio transmitting and receiving antenna and the receiving and transmitting equipment required for communicating with satellites (also known as space stations).

    Many earth station receivers use the double-superhet configuration, which has two stages of frequency conversion. The front end of the receiver is mounted behind the antenna feed and converts the incoming RF signals to a first IF in the range 900 to 1400 MHz. This allows the receiver to accept all the signals transmitted from a satellite in a 500 MHz bandwidth at C band or Ku band, for example. The RF amplifier has a high gain, and the mixer is followed by a stage of IF amplification. This section of the receiver is called a low-noise block converter (LNB). The 900-1400 MHz signal is sent over a coaxial cable to a set-top receiver that contains another down-converter and a tunable local oscillator. The local oscillator is tuned to convert the incoming signal from a selected transponder to a second IF frequency. The second IF amplifier has a bandwidth matched to the spectrum of the transponder signal. Direct-broadcast satellite TV receivers at Ku band use this approach, with a second IF filter bandwidth of 20 MHz.

    [Figure: block diagram of an earth station]
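The double-superhet frequency plan described above can be sketched numerically. The LNB and tuner local-oscillator frequencies below are illustrative assumptions (a 5.15 GHz LO is a common C-band LNB choice); only the 900-1400 MHz first-IF window comes from the text:

```python
# Sketch of the double-conversion (double-superhet) frequency plan.
FIRST_IF_MIN, FIRST_IF_MAX = 900e6, 1400e6   # first-IF window from the text (Hz)

def first_if(rf_hz: float, lnb_lo_hz: float) -> float:
    """LNB stage: mixes the incoming RF down to the first IF."""
    f_if = abs(rf_hz - lnb_lo_hz)
    assert FIRST_IF_MIN <= f_if <= FIRST_IF_MAX, "outside the first-IF window"
    return f_if

def second_if(first_if_hz: float, tuner_lo_hz: float) -> float:
    """Set-top stage: the tunable LO selects one transponder."""
    return abs(first_if_hz - tuner_lo_hz)

# Example: a C-band transponder at 4.0 GHz with an assumed 5.15 GHz LNB LO.
f1 = first_if(4.0e9, 5.15e9)
print(f1 / 1e6)   # 1150.0 -> the first IF lands inside 900-1400 MHz
```

Tuning the set-top LO rather than the LNB LO is what lets one fixed outdoor unit serve the whole 500 MHz of transponders at once.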

    6.3 UPLINKING AND DOWNLINKING


    A satellite receives television broadcasts from a ground station. This is termed uplinking, because the signals are sent up from the ground to the satellite. These signals are then broadcast down over the footprint area in a process called downlinking.

    To ensure that the uplink and downlink signals do not interfere with each other, separate frequencies are used for uplinking and downlinking.

    Band                      Downlink freq (GHz)   Uplink freq (GHz)
    S band                    2.555 to 2.635        5.855 to 5.935
    Extended C band (lower)   3.4 to 3.7            5.725 to 5.925
    C band                    3.7 to 4.2            5.925 to 6.425
    Extended C band (upper)   4.5 to 4.8            6.425 to 7.075
    Ku band                   10.7 to 13.25         12.75 to 14.25
    Ka band                   18.3 to 22.20         27.0 to 31.00
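The downlink column of the table can be expressed as a small lookup, e.g. for checking which band a received carrier belongs to. A minimal sketch built only from the ranges listed above:

```python
# Downlink ranges (GHz) copied from the table above.
DOWNLINK_BANDS_GHZ = [
    ("S band",                  2.555, 2.635),
    ("Extended C band (lower)", 3.4,   3.7),
    ("C band",                  3.7,   4.2),
    ("Extended C band (upper)", 4.5,   4.8),
    ("Ku band",                 10.7,  13.25),
    ("Ka band",                 18.3,  22.20),
]

def downlink_band(freq_ghz: float) -> str:
    """Return the band name whose downlink range contains freq_ghz."""
    for name, lo, hi in DOWNLINK_BANDS_GHZ:
        if lo <= freq_ghz <= hi:
            return name
    return "outside the listed bands"

print(downlink_band(4.0))    # C band
print(downlink_band(11.7))   # Ku band
```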

    6.4 Transponders


    The word transponder is coined from transmitter-responder, and it refers to the equipment channel through the satellite that connects the receive antenna with the transmit antenna. The transponder itself is not a single unit of equipment, but consists of some units that are common to all transponder channels and others that can be identified with a particular channel.

    7. CONCLUSION

    Doordarshan and its services are available today all over India via highly advanced networking facilities and technology tie-ups with satellite and cable system operators. It has been at the forefront in the use of satellite networking, secure encryption, subscriber management services and call-centre technologies.

    It is one of the oldest and fastest-growing industries in communication. To date it has provided its most valuable service to people not only in India but across the globe, and it promises to move ahead and continue its hard work in the same manner.