Principles for the Prediction of Video Decoding Times applied to MPEG-1/2 and MPEG-4 Part 2 Video

Michael Roitzsch <[email protected]>
Operating Systems Group
Computer Science Department
Technische Universität Dresden

13th June 2005

tutored by Martin Pohlack <[email protected]>,
department headed by Prof. Dr. rer. nat. Hermann Härtig <[email protected]>



Abstract

In this article, I will discuss a method to predict per-frame decoding times of modern decoder algorithms. By examining the MPEG-1, MPEG-2 and MPEG-4 part 2 algorithms in particular, I will develop a generic model for these decoders that also applies to a wide range of other decoders. From this model, I will derive a method to predict decoding times with reasonable quality while keeping overhead low. The effectiveness of this method will be shown by an example implementation and by comparing the resulting predictions with the actual decoding times. A brief outlook on future applications and extensions of these results concludes the paper.

Contents

1 Introduction
  1.1 Terminology
  1.2 Decoder Algorithms to consider
  1.3 Outline

2 Related Work
  2.1 Outside Results being used
  2.2 Comparison to other Work

3 Decoder Analysis
  3.1 Decoder Model
  3.2 Current Decoders
  3.3 Metrics for Decoding Time

4 Numerical Background
  4.1 QR Decomposition
  4.2 Householder's Algorithm
  4.3 Refining the Result by Column Dropping

5 Implementation
  5.1 Metrics Accumulation and LLSP Solver
  5.2 Metrics Extraction
  5.3 Integration into VERNER
  5.4 Frame Reordering

6 Results
  6.1 Prediction Accuracy and Overhead
  6.2 Remaining Issues
  6.3 Conclusion


1 Introduction

With increasingly demanding media applications emerging, like viewing or even creating and editing high definition content (HDTV), the inherent limitations of today's media software and underlying operating systems move into focus: lacking realtime properties, these applications can only be dealt with by using application-specific dedicated hardware, overprovisioning of resources, or arbitrary and thus ungraceful degradation of playback quality. To allow future systems to overcome these difficulties, realtime and QoS considerations should be applied [1, 2] to allow close to average-case resource allocation and graceful degradation of quality.

For example, a video player whose decoding component cannot keep up with the display speed and therefore has to skip the decoding of frames might want to prefer skipping frames with a high decoding time. Another possible application would be a video editing system, where the user can stack multiple layers of video and apply effects to them. This system needs to decide which parts of the final edit can be calculated on the fly during playback and which parts need to be prerendered. The prerendering has to include all critical parts to ensure sufficient playback quality, but at the same time, prerendering should be reduced to the minimum to keep the application responsive.

As most of these concepts require advance knowledge of the resource usage of the task ahead, and because the primary task in the media applications considered here is video decoding, with CPU time being the key resource, this work discusses a way to pre-determine per-frame decoding times for recent video decoder technologies. Obviously, this method must provide the decoding time with lower resource usage than the actual decoding. It should also be possible to calculate the decoding times on the fly for each frame, because users cannot be expected to provide precalculated trace data or wait for its collection. We can therefore deduce the following requirements for the prediction method:

• Provide per-frame decoding times considerably faster than the actual decoding takes. As little bitstream parsing and unpacking as possible should be done.

• Use only data for this prediction that is available to the decoder at decoding time of the frame, so it becomes possible to calculate the prediction on the fly, directly preceding the actual decoding of each frame.

• Do not rely on extensive code review of decoder implementations, because it quickly becomes outdated as new decoders are released or existing decoder code is optimized. Use statistical methods instead, so our method can quickly adapt to a variety of decoders.

• If this adaptation requires sample material, it should require material only roughly related to the target application. Specifically, it should not be required to use the exact target video for adaptation.

1.1 Terminology

First, I want to clarify the following terms, which will be used throughout the paper:

Decoder means the piece of software converting compressed video data into raw image data, usually in a device-dependent colorspace like YUV or, more rarely, RGB. Often, the term decoder denotes a decoding algorithm as well as its actual implementation. This article mostly deals with implementations, because the notion of decoding time is bound to actual software being executed.

Frame decoding time or just decoding time is the wallclock time it takes the decoder to parse and process, without being interrupted, enough of the compressed video data to have one video image completely assembled. Any further processing steps on the image, like enhancing its quality through the use of video filters or the presentation of the image on a graphics device, do not count as decoding time. Note that the decoder may not even emit the decoded image right after its completion, due to frame reordering.

Predicted frame decoding time or just predicted time or prediction is the estimate for the decoding time. The principles of deriving this value from the compressed video data are the subject of this work.

Decoder cycle time is the wallclock time that passes from control being given to the decoder until the decoder returns this control. The decoder executes one of its internal processing cycles in between. Mostly, one cycle directly corresponds to one video image being completely decoded, so that decoder cycle time and frame decoding time would tally. However, this might not be true for decoders with slice-based processing or frame reordering.

Predicted decoder cycle time is the estimate for the decoder cycle time. I will show ways to derive it from the predicted frame decoding time.

Video frame or just frame is used as a synonym for video image in this article.

The problem of frame reordering depends on the implementation of the decoder algorithm and will be discussed in detail in Subsection 5.4. For the following general considerations, I want to keep assumptions on actual implementations low, so I will only examine the per-frame times.

1.2 Decoder Algorithms to consider

The selection of the MPEG-1/2 [6, 7] and MPEG-4 part 2 [8] decoder algorithms as the key algorithms to analyze has been made for the following reasons:

[Figure 1: MPEG-2 encoder efficiency shown by bitrate need over the years [10]. Axes: bit rate (MBit/s) required for broadcast quality, over the years 1994–2002.]

• The decoder algorithms should have wide public acceptance, so the results of my work are actually applicable to real-world problems.

• The implementation should be proven and well optimized, because early stages of decoder software can have a different decoding time behavior.

• An open-source implementation should be available to allow easy insights into and reuse of existing code.

• Encoders for the algorithms should be efficient, to get representative compressed material. Early stages of encoders often use only a subset of the algorithm's features, so they require higher data rates for the same perceived image quality than more advanced encoders (demonstrated exemplarily for MPEG-2 in Figure 1). This behavior is not desirable if you want future-proof results.

MPEG-2 has been chosen because it has all the preceding properties: its presence on DVD and DVB makes it widely used. Thoroughly tested open-source decoder software is available from the libmpeg2 project [20]. Sophisticated encoders can be found in both the commercial sector and the open-source community.

MPEG-1 is similar to MPEG-2, so it came naturally to regard MPEG-1/2 as one. Results for the H.261 and H.262¹ standards can be derived easily because of their similarity to MPEG-1 and MPEG-2, respectively [12].

MPEG-4 part 2 is one of the video decoding algorithms specified in the MPEG-4 standard family. It is not as commonly used as MPEG-2, but with MPEG-4 part 10 [9] on the horizon, it seemed necessary to include a more recent algorithm than MPEG-2. MPEG-4 part 10, also known as AVC (advanced video coding) or H.264, appears set to become the next generation allround format. It scales from cell phones up to high definition video, is more efficient than MPEG-2, and it uses integer transforms to allow fast implementations. Both the Blu-ray and HD-DVD groups have adopted H.264, and it is the major format in Apple's recently released QuickTime 7 [13]. However, good open-source implementations of H.264 decoders are not yet available, sample content is hard to come by, and sufficiently efficient encoders are not commonly available even on the commercial market. For those reasons, H.264 is not one of the key algorithms I will examine, but I will comment on how my ideas apply to H.264 in Subsection 3.2.3. Although H.264 is much more sophisticated than MPEG-4 part 2, the latter is still closer to it than MPEG-2, so it seems wise to go at least part of the way by having a deeper look into MPEG-4 part 2. The H.263 standard is close to being a true subset of MPEG-4 part 2, so most of the findings for MPEG-4 part 2 will also apply to H.263.

Other candidate algorithms were not included for various reasons:

¹ The MPEG-1/2/4 standards from the Moving Picture Experts Group and the H.261/262/263/264 standards from the International Telecommunication Union (ITU) overlap.

DV is widely used in digital camcorders, but the algorithm is not of particular interest, because every frame is self-contained; no inter-frame redundancy is exploited [14]. This makes the algorithm well-suited for editing with frame granularity, but the compression is not exceptionally efficient.

The Windows Media algorithms, especially WMV9, are quite common today, and good decoder as well as encoder implementations are available from Microsoft. Although Microsoft has standardized WMV9 under the name VC-1, no open-source implementations are available. Basically the same applies to the Sorenson SVQ3 algorithm, which is used in most downloadable QuickTime movie trailers. Efforts at reverse engineering both WMV9 and SVQ3 have shown evidence that both decoders are based on ideas from H.264 [15].

The selected MPEG-1/2 and MPEG-4 part 2 decoders exploit both intra- and inter-frame redundancy and the visual perception deficiencies of the human eye to compress the video data. The common way to do that is to blockwise transform the image on encoding from the spatial domain into the frequency domain, because the perception model the encoders use can more easily exploit the redundancy there. This process has to be reversed during decoding, which requires numerous iterative calculations on the compressed data to reconstruct the image. This complexity is the main reason why decoding is CPU intensive and why scheduling decisions can benefit from knowing a prediction of the decoding times in advance.

1.3 Outline

My vision (see Figure 2) is to develop a predictor which is fed with only the compressed video data and produces estimates for the decoder cycle time. The decoder itself should not be modified in any way. The predictor should be as independent of the actual decoder implementation as possible, to allow future upgrades to newer decoder versions without any major modifications to the predictor.

[Figure 2: The target architecture of this work — the compressed video feeds both the unmodified decoder implementation, which outputs uncompressed video frames, and the predictor, which outputs the predicted decoder cycle time.]

The basic idea I am going to explore is to break down the entire decoding of a frame into various sub-tasks. This will be achieved by creating a generic decoder model into which all examined decoders and hopefully a wide range of future decoders fit (see Subsection 3.1). I will then discuss the influence of each sub-task on the decoding time in Subsection 3.3. I will point out characteristic properties of the stream that correlate positively with the decoding time. I will then extract values from the compressed video stream that serve as metrics to quantify the discussed properties. The overall per-frame decoding time will then be estimated by a linear combination of the extracted metrics. The mathematics used to calculate the coefficients of the linear combination will be explained in Section 4, followed by comments on the implementation of the given principles in Section 5. An evaluation of the results and starting points for future work conclude the article.
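The idea of estimating the per-frame decoding time as a linear combination of extracted metrics can be sketched as a linear least squares fit. This is only an illustration with invented numbers: the metric names and values are hypothetical, and where the paper solves the problem via a QR decomposition (Section 4), `numpy.linalg.lstsq` is used here as a stand-in.

```python
import numpy as np

# Hypothetical training data: each row of A holds the metrics extracted from
# one frame (e.g. coded bits, macroblock count, motion vector count), and
# each entry of t holds the measured decoding time of that frame in seconds.
A = np.array([[12000.0, 396.0, 120.0],
              [ 8000.0, 396.0,  40.0],
              [15000.0, 396.0, 300.0]])
t = np.array([4.1e-3, 2.9e-3, 5.2e-3])

# Solve the linear least squares problem min ||A x - t|| for the coefficient
# vector x; the paper does this with a QR decomposition instead.
x, *_ = np.linalg.lstsq(A, t, rcond=None)

def predict(metrics):
    """Predicted decoding time: linear combination of a frame's metrics."""
    return float(np.dot(metrics, x))
```

With more training frames than metrics, the system is overdetermined and the fit minimizes the squared prediction error over the training set.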

2 Related Work

2.1 Outside Results being used

My work is founded on previous results in the research areas of media player design, decoder algorithms, and mathematics:

I have implemented my ideas using VERNER [19], a video encoder and player developed by Carsten Rietzschel. VERNER is easily extensible by plugins and uses a multi-tier client-server architecture to cleanly model the video playback pipeline of demultiplexing, decoding, synchronizing and outputting by assigning these steps to separate tasks. This design sufficiently decouples the timing behavior of the decoder from the rest of the player, thus allowing me to use VERNER as an experimenting ground for my measurements and predictions.

VERNER itself is based on DROPS [17], the Dresden realtime operating system, a multiserver microkernel-based system providing the basic OS services like a filesystem or a graphical output mechanism. The Fiasco [18] L4 microkernel in use is optimized for low communication latency, which allows the VERNER client-server design to function efficiently. The realtime capability of the system is not exploited in my work, but could become vital in using the results to improve video playback behavior in resource-constrained situations.

To measure the various times and display them graphically, I used the RT Mon framework, a software sensor system to gather, accumulate, distribute, and visualize arbitrary data. RT Mon has been developed by Martin Pohlack.

Because I want to extract characterizing values from the video bitstreams, I required bitstream parsers for the selected decoder algorithms MPEG-1/2 and MPEG-4 part 2. I relied on the open-source implementations libmpeg2 [20] and XviD [21], which are already used by VERNER for the actual decoding. I stripped them down to the bitstream parsing level and also reused code from their example applications to implement the bridging between the specific decoder library API and the abstract prediction API I will provide in Section 5.

Finally, I am also building my work on results from mathematics, especially numerics. I used well-known numerical techniques such as the QR decomposition as well as established algorithms like the Householder transformation. These will be explained in greater detail in Section 4.

2.2 Comparison to other Work

Decoding time prediction has been a research subject before, so previous results exist. Altenbernd, Burchard, and Stappert present an analysis of MPEG-2 execution times in [3]. The common idea behind their work and mine is to gain metrics from the video stream that correlate well with the decoding time. However, there are considerable differences in the two approaches: Altenbernd, Burchard, and Stappert divide the decoder into two phases, using data extracted in the first phase to predict the decoding time of the second phase. I want to avoid such heavy modifications to existing decoder code and rather provide a solution based on preprocessing. I also model the decoder as a sequence of different phases, but my division will be more fine-grained (see Figure 3). They also examine worst-case execution times, using source code analysis, to completely avoid any underestimation. I do not want to rely on the specific source code, because the efforts of the analysis have to be repeated when the code changes due to optimizations. This simplification allows me to generalize my method to target more decoders than just MPEG-2. Because I cannot safely avoid underestimation with my approach, I have to settle for a best-effort estimation, but the results are still important, because the quality-assuring scheduling algorithm QAS [5] developed here at TU Dresden works on the basis of a probability distribution for execution times. It remains an area of future research to derive worst-case execution times with my method.

Another similar analysis has been conducted by Andy Bavier, Brady Montz, and Larry L. Peterson in [4], but it is also limited to MPEG, focuses more on decoding times at the granularity level of network packets, and does not target transferability of the results between different videos. Yet they also follow the path of predicting decoding times by extracting metrics from the stream.

3 Decoder Analysis

3.1 Decoder Model

By looking into the inner workings and functional parts of the decoder algorithms under consideration, I established a decoder model by abstracting from the given algorithms. The model is generic enough to be applicable to a wide range of other algorithms. The simple basic structure of the model can be seen in Figure 3. It is a chain of function blocks that are executed iteratively in loops. Because I am concentrating on execution times, the edges of the graph represent control flow rather than any data flow. Control starts at the bitstream parsing block and ends implicitly when the compressed data stream ends.

Every function block but the first can have multiple alternative and completely separated execution paths in the same decoder algorithm. The execution path is chosen based on context data extracted from the compressed video stream, like a frame type. As a notable special case, a function block can have a no-operations execution path, essentially doing nothing. I will now go through the blocks and explain each one and its input and output more closely. Where it serves the understanding, I will also mention the encoder. The decoding standards, however, only specify the bitstream and the decoder, leaving the details of efficient encoder implementations to the market.

3.1.1 Bitstream Parsing

The bitstream parsing receives the compressed video stream for every frame and extracts meta- and context-information from the stream, which is required to control the following decoding steps. Because switches between the multiple execution paths of the function blocks are done based on the data extracted by this step, this function block itself cannot have multiple, completely separate execution paths, as there would be no way to switch between them. Of course the inner workings of this block can still have different control flows, but those can hardly be regarded as separated.

[Figure 3: Chained block model for decoders — a per-frame loop (bitstream parsing, decoder preparation) enclosing a per-macroblock loop (decompression, inverse scan, coefficient prediction, inverse quantization, inverse block transform, spatial prediction, temporal prediction, prediction postprocessing).]

Bitstream parsing is nondestructive, so the data this step outputs is unchanged from its input. This makes this function block reusable for our approach of extracting certain metrics from the stream that shall give an indication of the decoding time. Because the context data extracted by the parser can change the control flow through the remaining function blocks tremendously, this decoder step is most likely required for any usable decoding time estimation.

The context data the parser creates includes information such as:

• Pixel resolution of the target video image,

• Other size information the decoder needs to allocate buffers for intermediate results in advance,

• Lookup tables used by following function blocks,

• Frame type of the upcoming compressed video data. Most modern decoders use different frame types to easily switch between different execution paths of following function blocks, like the temporal prediction. The algorithms want to exploit temporal redundancy, so they store only changes between adjacent frames. However, not all frames can be treated this way, because to start at an arbitrary position inside a compressed video file, the decoder needs a self-contained frame without inter-frame dependencies. This makes it necessary to dismiss the temporal prediction for those frames, which is done by assigning a frame type to them.
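As an illustration of the kind of context data the parser extracts, the following sketch reads the frame type from a raw MPEG-1/2 video elementary stream. It assumes the MPEG picture header layout: after the picture start code (00 00 01 00) come 10 bits of temporal_reference, then the 3-bit picture_coding_type (1 = I, 2 = P, 3 = B). Error handling and the remaining header fields are omitted.

```python
# Sketch: extract the frame type from an MPEG-1/2 picture header, a piece of
# the context data that steers the execution paths of later function blocks.
PICTURE_START_CODE = b"\x00\x00\x01\x00"
FRAME_TYPES = {1: "I", 2: "P", 3: "B"}

def picture_coding_type(stream: bytes) -> str:
    pos = stream.find(PICTURE_START_CODE)
    if pos < 0:
        raise ValueError("no picture header found")
    # The two bytes after the start code hold temporal_reference (10 bits)
    # followed by picture_coding_type (3 bits); the rest spills into vbv_delay.
    hdr = int.from_bytes(stream[pos + 4 : pos + 6], "big")
    return FRAME_TYPES[(hdr >> 3) & 0x7]
```

Note that this only demonstrates the principle; a real parser would walk all start codes of the stream, not just the first picture header it finds.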

3.1.2 Decoder Preparation

The preparation stage allocates any memory buffers needed for decoding. When decoder algorithms exploit temporal redundancy, they often work with previous images. Those may need to be copied or modified without disturbing existing images, which must not be changed, because they might not have been displayed yet. This may not be needed for all frames, so the execution path of the decoder preparation can vary based on the frame type.

Because this function block only paves the way for the following blocks, we can safely drop it when extracting our metrics, unless we need to execute one of the per-macroblock functions that rely on it.


3.1.3 Decompression

This function block is the first that is executed inside a per-macroblock loop. A macroblock is an area of the target image whose compressed data is stored consecutively in the data stream and that is decoded in one iteration of the loop. For today's decoder algorithms, macroblocks are square areas of 8×8 or 16×16 pixels.

The data needed to further decode the macroblock is stored using a lossless compression technology like Huffman coding [22]. This compression is reversed in this step, and the intermediate data is kept in a buffer for later steps to access. Because the compressed data for each macroblock usually varies in length, depending on the size and entropy of the uncompressed data, the decompression step also involves additional bitstream parsing when walking over the compressed stream from macroblock to macroblock. I decided to subsume decompression and moving forward in the compressed stream as one function block, because their implementations intermingle.
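The lossless decompression can be illustrated with a toy prefix-free (Huffman-style) decoder. The code table here is invented for the example; an MPEG variable-length decoder would use the tables fixed by the standard or embedded in the stream, and would operate on bits, not characters.

```python
# Minimal sketch of the lossless decompression step: decode a prefix-free
# bitstring against a code table, as a VLC decoder does per macroblock.
CODE_TABLE = {"0": "A", "10": "B", "110": "C", "111": "D"}  # toy table

def huffman_decode(bits: str):
    out, code = [], ""
    for b in bits:
        code += b
        if code in CODE_TABLE:        # a complete codeword: emit its symbol
            out.append(CODE_TABLE[code])
            code = ""
    if code:
        raise ValueError("truncated codeword at end of stream")
    return out
```

Because codewords vary in length, the decoder cannot know where a macroblock's data ends without decoding it; this is why decompression and stream advancement intermingle, as noted above.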

The decompression function can use tables from the bitstream parsing stage, if a Huffman table or something equivalent is embedded in the compressed video. There may even be decoder algorithms switching between completely different decompression functions, because they may be specifically adapted to certain types of video material. The compression filters in the PNG image format point in this direction [16]. They are applied to the byte stream right before the actual compression, and their effect varies with the image content.

In addition to the macroblock data itself, each macroblock may come with metadata like a macroblock type or other information for following decoder steps. As a special case, a macroblock might consist entirely of such context data and contain no compressed image data at all.

3.1.4 Inverse Scan

Because the resulting video image is two-dimensional, the transforms used later in the compression are two-dimensional as well. However, the decompressed macroblock we received from the previous stage is a one-dimensional list of bytes. Those need to be rearranged into a 2D matrix. Because a line-by-line reordering would partly counteract the preceding entropy compression step, this reordering is done in a different, often diagonal pattern. Different patterns may be chosen based on earlier information about either the entire frame or the macroblock. The copying into the 2D matrix can be implemented together with further decompression like run-length encoding.
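The common diagonal pattern is the zigzag scan, which keeps low-frequency coefficients first in the stream. A sketch of the corresponding inverse scan, generating the zigzag order by walking the anti-diagonals in alternating direction (MPEG decoders also offer an alternate scan pattern, not shown here):

```python
# Sketch of the inverse scan: rearrange a 1-D coefficient list into an n x n
# matrix along the zigzag pattern used by MPEG-1/2 for 8x8 blocks.
def zigzag_positions(n=8):
    order = []
    for d in range(2 * n - 1):                 # walk the anti-diagonals
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        order.extend(cells if d % 2 else reversed(cells))
    return order

def inverse_scan(coeffs, n=8):
    block = [[0] * n for _ in range(n)]
    for (r, c), v in zip(zigzag_positions(n), coeffs):
        block[r][c] = v
    return block
```

In a real decoder this table is precomputed once, so the inverse scan reduces to a table lookup per coefficient.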

3.1.5 Coefficient Prediction

Decoders can analyze the matrix to try to reconstruct image features that have been compressed away. One approach is to define prediction functions that extrapolate additional data from existing data. The encoder will use this function to determine which information the decoder will be able to reconstruct when it is left out. Either by applying a simple accuracy tolerance, or by storing prediction hints, or by storing the remaining error for the decoder to apply to the prediction, the encoder decides which data can be dropped without degrading image quality too much. The encoder might even have different prediction functions or prediction patterns at its discretion.

Because the following transform function commonly uses each element of the macroblock matrix more than once, the coefficient prediction step is executed entirely before the next steps begin.

3.1.6 Inverse Quantization

During encoding, the matrix of each macroblock has been multiplied with a quantization matrix, which, more than all other steps, makes the compression lossy. The quantization is reversed before decoding proceeds by multiplying with an inverse quantization matrix, in which every matrix element is the reciprocal of the corresponding element of the quantization matrix. The inverse quantization matrix can be a default one, or it can be embedded in the bitstream. Different matrices are used for different frame types or for different types of macroblocks.

Skipping the coefficient prediction, the inverse scan and inverse quantization steps may be implemented together as a loop with just table lookups and a multiplication. However, because the different versions of inverse scan and inverse quantization can be selected independently of each other, they must be represented as two function blocks.
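Following the description above, inverse quantization is an elementwise multiplication with a reciprocal matrix. The 2×2 matrices below are toy values; real MPEG inverse quantization additionally involves a per-macroblock quantizer scale, mismatch control and saturation, which this sketch omits.

```python
# Sketch of inverse quantization as described in the text: elementwise
# multiplication of the macroblock matrix with an inverse quantization matrix
# whose entries are the reciprocals of the encoder's quantization matrix.
QUANT = [[4.0, 8.0],
         [8.0, 16.0]]                       # toy quantization matrix
INV_QUANT = [[1.0 / q for q in row] for row in QUANT]

def inverse_quantize(block):
    return [[v * iq for v, iq in zip(brow, qrow)]
            for brow, qrow in zip(block, INV_QUANT)]
```

As the text notes, this multiplication can fuse with the inverse scan's table lookup into a single per-coefficient loop.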

3.1.7 Inverse Block Transform

The macroblock matrix we dealt with in the previous steps was not necessarily a matrix in the spatial domain, so the rows and columns of the matrix are not directly mapped to the image's x and y coordinates. The algorithms usually choose a different domain, in which operations like quantization and coefficient prediction are fairly simple matrix operations and fit the visual perception model used by the algorithm in their effect on the final image. A common domain is the frequency domain, which I will describe for the MPEG-1/2 algorithm in Subsection 3.2.1. It has the nice property that quantization in the frequency domain gradually smooths out details in the spatial domain.

This decoder step now transforms the macroblock matrix into the spatial domain, which involves complicated mathematics, sometimes even with floating point calculations. The resulting spatial matrix corresponds to a portion of the final image and has the same dimensions as the macroblock matrix.
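For the frequency-domain case, the inverse block transform can be illustrated with a textbook 2-D inverse DCT. This naive O(n⁴) version is for illustration only: real decoders use fast, often integer-based variants, and the MPEG standards prescribe accuracy requirements this toy does not attempt to meet.

```python
import math

# Naive sketch of the inverse block transform: an orthonormal 2-D inverse
# DCT turning a frequency-domain matrix F into a spatial-domain block.
def idct_2d(F):
    n = len(F)
    def c(k):                     # orthonormal scaling factor
        return math.sqrt(0.5) if k == 0 else 1.0
    out = [[0.0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            s = 0.0
            for u in range(n):
                for v in range(n):
                    s += (c(u) * c(v) * F[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[x][y] = 2.0 / n * s
    return out
```

A block containing only a DC coefficient transforms into a flat spatial block, which matches the intuition that quantizing away high-frequency coefficients smooths the image.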

3.1.8 Spatial Prediction

The spatial and temporal prediction steps described now use previously decoded data of either the same frame (spatial prediction) or a different frame (temporal prediction) to reconstruct or merely guess the part of the image covered by the macroblock currently being decoded. This step can potentially be executed at the same time as steps 4 (inverse scan) to 7 (inverse block transform), but I will not pursue this parallelism, because the commonly available decoder implementations are single-threaded. The outcome of the inverse block transform is then added to or otherwise merged with this reconstruction to reduce the residual error between the prediction and the image to be encoded.
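The final merge of prediction and transform output can be sketched as adding the residual to the predicted samples and clipping to the valid range. This is a generic illustration (1-D rows of samples), not any particular decoder's code.

```python
# Sketch of merging a prediction with the transmitted residual: the inverse
# block transform's output is added to the predicted samples and clipped to
# the valid 8-bit sample range.
def merge_residual(prediction, residual, lo=0, hi=255):
    return [min(hi, max(lo, p + r)) for p, r in zip(prediction, residual)]
```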

3.1.9 Temporal Prediction

Today's decoder algorithms benefit tremendously from temporal redundancy in the video. The most basic idea is to not always store entire video images, but to store the differences to the previous image. This has been extended over decoder generations to allow reference images other than just the previous one and to compensate for motion by storing translation vectors with the macroblocks. In the most sophisticated algorithms, it even includes rotating and warping parts of the reference image, compressing the motion vectors by applying prediction to them, or weighted interpolation between areas of multiple reference images.
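The basic form of motion compensation described above can be sketched as copying a macroblock-sized area out of a reference frame, displaced by the macroblock's motion vector. Frames are plain nested lists here, and sub-pel interpolation and picture-boundary handling are deliberately left out.

```python
# Sketch of basic temporal prediction: fetch the block of the reference frame
# that the motion vector (dy, dx) points at for a macroblock whose top-left
# corner is at (top, left).
def motion_compensate(reference, top, left, mv, size=2):
    dy, dx = mv
    return [[reference[top + dy + r][left + dx + c] for c in range(size)]
            for r in range(size)]
```

A zero motion vector degenerates to the "store only the differences to the previous image" case, where the prediction is simply the co-located block.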

Because the temporal prediction introduces inter-frame dependencies, it cannot be used on every frame to its full extent. The algorithms need to consider that playback might start at an arbitrary position inside the stream, for which one would require a frame without any dependencies on other frames, or that a player might want to skip the decoding of upcoming frames when short on CPU time, which requires that not too many other frames depend on the ones about to be dropped. The distinction which features of the temporal prediction are allowed is usually made with frame types again: For specific frame types, using future frames as reference images is allowed. Other frame types may only use images from the past up to a certain depth. The most restricted frame type is required to be self-contained; no reference frames can be used.

The merging of the results of prediction and inverse transform ends the per-macroblock loop. Execution continues with the decompression of the next macroblock. It should be noted that most algorithms bundle consecutively stored macroblocks with common properties into a so-called "slice". This bundling is not yet relevant for considerations about execution time, but it will become more interesting when algorithms start to associate a semantic with slices, for example marking slices containing more important parts of the image, like objects in the foreground. Determining a per-slice decoding time will be interesting, because it would allow slice-based scheduling: A player would be able to skip decoding of unimportant parts of the image when not enough CPU time is left. Such parts of the image would then stay unchanged compared to the previously displayed image. More details on this can be found in the discussion of H.264 in Subsection 3.2.3.

3.1.10 Post Processing

Post processing applies a set of filters to the resulting image so that compression artifacts are reduced and the perceived image quality is enhanced. However, with today's algorithms, post processing is often optional, so in realtime applications, a video decoder can skip the post processing stage if the scheduled CPU time is exceeded. Therefore, I will not analyze the execution time of the post processing stage and only examine the mandatory parts of the decoding. With future algorithms, post processing will become an integral part of the decoding process: It may use hints from earlier decoding stages or improve the image so significantly that the output becomes unusable without post processing. For those algorithms, researching the execution time of post processing will become necessary.

3.2 Current Decoders

I will now give a brief overview of how the algorithms this article examines fit into the developed decoder model. To widen the perspective and to further demonstrate the broadness of the model, I will also show how the model accommodates two additional algorithms that have been touched upon in Subsection 1.2; for the reasons explained there, those algorithms' execution times will not be examined.

3.2.1 MPEG-1/2

The MPEG-1 [6] and MPEG-2 [7] algorithms distinguish between three frame types: I-frames (intraframes), P-frames (predicted or interframes) and B-frames (bidirectionally predicted frames). The frame type affects the selected execution path of the inverse quantization and temporal prediction function blocks. The function blocks, as depicted in Figure 3, are as follows:

1. The MPEG video elementary bitstream is packetized for easy delivery. The packets and headers are byte-aligned to allow resynchronization in case of bitstream errors, which may be caused by network outages. The various fields in the headers, however, are not byte-aligned, which makes parsing more difficult, but reduces the overall size by increasing the information density of the stream.

2. The decoder preparation is almost empty apart from acquiring the memory for the target image. Because the memory buffers needed for decoding do not change in size, there is no need for allocation or deallocation unless the video resolution changes.


3. The decompression stage is a combined Huffman/RLE algorithm, which determines patterns of nonzero coefficients and zero runs in the macroblock values. The nonzero values are stored together with an indicator for the pattern of zeros. Both are compressed according to a Huffman table.

4. The inverse scan can use two different zig-zag patterns to reorder the macroblock values in the 2D matrix. The pattern is selected by a flag in the bitstream.

5. Coefficient prediction is only done by storing the difference of consecutive DC coefficients (see Step 7) instead of their actual values.

6. The inverse quantization multiplies the macroblock matrix elementwise with an inverse quantization matrix. Different matrices are available for I-type and P/B-type frames, and the matrices can also be embedded into the bitstream.

7. The heart of MPEG-1 and MPEG-2 is the IDCT, the inverse discrete cosine transform. It block-transforms the macroblock from the frequency domain into the spatial domain. The frequency domain matrix stores coefficients for a decomposition of the spatial domain into 2D cosine contributions. The 64 2D cosine functions depicted in Figure 4 form a base of the spatial domain. The upper left value is the constant offset, whose base function has a zero frequency in both directions, called the "DC coefficient". The remaining values with a frequency other than zero in at least one direction are called "AC coefficients".² The transform takes those coefficients to scale the associated 2D cosine functions. All these contributions are then superimposed to form the final spatial domain matrix. We can see that the lower frequencies are located at the upper left corner and that they increase toward the lower right corner. While the discrete cosine transform and its inverse are lossless except for rounding errors, evaluations have shown that most images have dominant low frequency contributions and only little influence of high frequencies, so especially after quantization, a significant number of values in the frequency domain matrix will become zero during encoding. As for execution times, the IDCT is computationally expensive, but fast integer implementations with SIMD properties exist. By vectorizing loops, these implementations are able to process several values in parallel, so they make use of instruction sets like the Intel MMX instructions to speed up the calculation.

²I believe this to stem from the electronics world, where DC and AC stand for direct current and alternating current.

Figure 4: Base functions of the discrete cosine transform [23, 7].

8. There is no spatial prediction in MPEG-1/2.

9. The temporal prediction is only active for P- and B-type images. For I-type frames, it is empty. A P-frame stores the differences to the previous I- or P-frame; a B-frame stores the differences to the previous and the next I- or P-frame, whereas every macroblock can reference the previous, the next or an average of both frames. B-frames are never used as reference frames, so the decoder has to keep only two reference frames at any time: the backward and the forward reference. Additionally, every macroblock can encompass a motion vector, which moves a certain tile of the reference image before applying the stored differences. This vector can operate with full or half-pixel granularity.

10. There is no post processing in MPEG-1/2.
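As an illustration of the half-pixel granularity in step 9, the following sketch fetches a motion-compensated tile by bilinear averaging. It is a simplified model: the rounding behavior of the actual MPEG specification differs in detail, and the reference image is assumed to be large enough for the requested tile.

```python
def half_pel_sample(ref, x2, y2):
    # Sample a reference image (list of pixel rows) at half-pel
    # coordinates (x2 / 2, y2 / 2) by averaging the neighboring pixels.
    x, y = x2 // 2, y2 // 2
    fx, fy = x2 % 2, y2 % 2      # half-pel fraction in each direction
    total = (ref[y][x] + ref[y][x + fx]
             + ref[y + fy][x] + ref[y + fy][x + fx])
    return (total + 2) // 4      # integer average with rounding

def predict_tile(ref, bx, by, mvx2, mvy2, size=16):
    # Predict a size x size tile at (bx, by), moved by a motion vector
    # given in half-pel units, before the stored differences are applied.
    return [[half_pel_sample(ref, 2 * (bx + i) + mvx2, 2 * (by + j) + mvy2)
             for i in range(size)]
            for j in range(size)]
```

An integer motion vector (even half-pel coordinates) degenerates to a plain copy; odd coordinates average two or four source pixels.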

The main differences between MPEG-1 and MPEG-2 are the support for interlaced material in MPEG-2 and the scalability extensions, which allow decoding with a reduced resolution base layer video, complemented by enhancement layers. Other than that, only minor changes like an extended resolution range and the addition of new chroma subsampling formats occurred.

3.2.2 MPEG-4 part 2

The MPEG-4 part 2 algorithm adds a new frame type to the common I-, P- and B-frames. The S-frame is a sprite frame, which uses a technology called global motion compensation (GMC) to apply an overall motion to an entire reference frame. The frame types and macroblock types are once again the deciding factors for the execution path used to decode a frame.

1. The bitstream is densely packed, so the syntax elements in it are not necessarily byte-aligned, which makes parsing difficult. The stream can contain a complexity estimation header, which stores information we could use for our metrics extraction, like DCT block counts. Unfortunately, this header is optional and the common encoder implementations do not use it, so existing videos would have to be reencoded with an extended encoder to benefit from this header; therefore, I will not pursue this.

2. The P-, B- and S-frames use a special treatment of the edges of the reference frames for motion vectors beyond the frame boundaries. This preparation step is computed before the reference frames are used.

3. The bitstream uses several tables for variable length coding of the coefficients and the motion vectors.

4. Different scan tables are available, including scans suitable for interlaced material.

5. The coefficient prediction treats both the DC and the AC coefficients by copying rows from macroblocks above and columns from macroblocks left of the current position. Because rows represent vertical shapes and columns horizontal shapes, this will extrapolate existing image elements into the current macroblock. The stored coefficients from the bitstream are merely offsets that are added to the predicted ones.

6. The inverse quantization uses a quantization matrix to scale the coefficients. Adaptive quantization by changing the matrix within one frame is possible.

7. The block transform is the already known IDCT (see Subsection 3.2.1).

8. There is no spatial prediction in MPEG-4 part 2.

9. The temporal prediction for P- and B-frames is quite similar to the one in MPEG-2: P-frames can use up to one reference image block, selected by a motion vector; B-frames can use up to two references and interpolate between them. The motion vector precision has been doubled compared to MPEG-2 and allows for quarter pixel accuracy. One 16×16 macroblock can be divided into four 8×8 subblocks, each using an individual motion vector. The new S-frame uses one reference and shifts or warps the entire image using global motion vectors. The result of the temporal prediction is then enhanced by adding the result of the IDCT to reduce the prediction error.

10. MPEG-4 part 2 contains an optional post processing step with different quality levels. It can reduce the block artifacts that can be seen on the macroblock boundaries, and it can reduce ringing introduced by overly strong high frequency contributions in images with sharp edges.
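The coefficient prediction of step 5 can be sketched as follows: the first column of the left neighbor block (or the first row of the block above) serves as the prediction, and the coefficients decoded from the bitstream are treated as offsets. This is a deliberately simplified illustration; the real standard additionally selects the prediction direction adaptively and rescales for differing quantizers.

```python
def predict_block_coefficients(decoded, neighbor, from_left):
    # `decoded` holds the offsets parsed from the bitstream; `neighbor`
    # is the already reconstructed coefficient block to the left
    # (from_left=True) or above the current position.
    n = len(decoded)
    result = [row[:] for row in decoded]
    if neighbor is None:
        return result  # no prediction at image borders
    if from_left:
        for r in range(n):
            result[r][0] += neighbor[r][0]  # extrapolate horizontal shapes
    else:
        for c in range(n):
            result[0][c] += neighbor[0][c]  # extrapolate vertical shapes
    return result
```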

The MPEG-4 part 2 standard has been extended to include support for nonrectangular video by storing a 1-bit alpha channel, and like MPEG-2 it allows scalable decoding with a reduced resolution base layer and enhancement layers. The standard has been further polluted with concepts alien to traditional video compression, like FBA objects for face and body animation with synthesized lipsync mouth movements and fully scalable 3D meshes with compressed textures. All these features have never received wide acceptance, but they made the standard fairly big and difficult to implement.

Profiles are used to limit the number of features a decoder has to implement, so I will name the two most important ones [11]:

Simple Visual Profile is designed for low power and low latency encoding and decoding, so it is suited for video communication and mobile applications. It is similar to H.263. The decoder only has to support I- and P-frames and half pixel motion vector accuracy.

Advanced Simple Profile (ASP) targets maximum coding efficiency for desktop and broadcast video. The decoder must support I-, P-, B- and S-frames with global motion compensation and quarter pixel accuracy.

I will not consider any of the MPEG-4 part 2 features outside the advanced simple profile, because they are not commonly used and there are no working open-source decoder or encoder implementations available.

3.2.3 H.264

H.264 is the most advanced of the codecs described here and, for reasons stated before, it is not considered for the actual decoding time prediction. However, it is still interesting to see that H.264 fits into our decoder model, to demonstrate the model's generality.

1. The H.264 bitstream is divided into two layers: the network abstraction layer and the video coding layer. The latter contains the actual video stream, whereas the former stores sideband information for network streaming. This includes data partitioning techniques and redundant slices, which allow a decoder to tolerate or hide network packet loss to a certain degree. Streaming applications have been taken into account throughout the entire design of H.264, as we will see later. But because the format also targets the high-end broadcast market, it also includes features missing in previous standards, like frame numbering and auxiliary pictures, which can be used for alpha channels.

2. The decoder preparation can require considerable resources, especially memory, because unlike the typical two reference frames in previous standards, H.264 can use up to 32 reference frames.

3. The bitstream uses a context-adaptive variable-length coding for the macroblock coefficients, with coding tables changing dynamically based on bitstream statistics. The side information and syntax elements of the bitstream are compressed with a context-adaptive binary arithmetic coding. The following decoding steps are controlled by the type information, which can be one of the usual I-, P- or B-types or one of the new SI- or SP-types. But unlike older algorithms, H.264 assigns the type at the slice level, not at the frame level. To accommodate network streamed content with multiple datarate versions of the same video, the switching intra (SI) and switching predictive (SP) slice types allow the decoder to switch to a different datarate without having to worry about consistent reference information. In addition to that, macroblocks and slices within a frame can be ordered arbitrarily to allow regions of equal visual importance to be transferred together. With the support of slice priorities and visual utilization information, future decoders can scale their performance by dynamically omitting less important parts of the image or decoding them in a lower quality.

4. The inverse scan reads the coefficients into the 16×16 sized macroblock matrix. The macroblocks are divided into 4×4 sized sub-blocks.

5. There is no coefficient prediction in H.264. It has been superseded by a complete spatial prediction, see Step 8.

6. The inverse quantization tables are fixed in H.264 and designed to have several quantization levels that correlate linearly with perceived visual quality.

7. The block transform is not the IDCT, but an integer transform with similar properties (see Figure 5). It operates on 4×4 blocks of coefficients and is an exact-match calculation that can be implemented using only shift and add operations on 16-bit integers, thus preventing rounding errors in both the inverse and forward operation. This allows H.264 to operate completely losslessly when the quantization step is skipped.

8. A spatial prediction is done on I-slices, which extrapolates image content from the edges of neighboring macroblocks using different prediction methods.

9. For P- and B-slices, temporal prediction uses motion compensation to reuse image data of previously decoded frames. The block granularity can reach from 4×4 up to 16×16 blocks, and the motion vectors use quarter pixel precision. While previous standards did not allow B-frames as reference, H.264 does not have such a limitation. The large number of up to 32 reference frames can be leveraged to do long-term predictions. P-type slices select one of those frames as the reference; B-type slices can use up to two of them with an arbitrarily weighted interpolation, allowing for efficient coding of crossfades. To limit the inter-frame dependencies, selected frames are tagged as IDR-frames and function as dependency boundaries. These frames only contain I-slices and therefore do not depend on other frames. Temporal prediction must never exceed the frame interval between the previous and the next IDR-frame.

Figure 5: Base functions of the H.264 integer transform [9].

10. H.264 uses a mandatory deblocking filter that has to be applied before the final image is kept as a reference frame. This means the deblocking cannot be skipped, because this would result in slightly wrong reference frames, introducing a drift between the original and the decoded images by an increasing residual error.
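The shift-and-add nature of the 4×4 integer transform in step 7 can be sketched with its separable butterfly structure. This sketch omits the quantization rescaling and the final rounding shift of a real decoder; it only shows that the core transform needs nothing beyond additions, subtractions and arithmetic shifts.

```python
def inverse_transform_1d(c):
    # One 4-point butterfly of the H.264-style inverse core transform,
    # built from additions and arithmetic shifts only (no multiplies).
    e0 = c[0] + c[2]
    e1 = c[0] - c[2]
    e2 = (c[1] >> 1) - c[3]
    e3 = c[1] + (c[3] >> 1)
    return [e0 + e3, e1 + e2, e1 - e2, e0 - e3]

def inverse_transform_4x4(block):
    # Separable 2D transform: transform the rows, then the columns.
    rows = [inverse_transform_1d(r) for r in block]
    cols = [inverse_transform_1d(list(col)) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```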


Despite its advanced design, H.264 fits well into the proposed decoder model, as I have shown, so future application of the presented methods to predict H.264 frame decoding times should be possible.

3.2.4 DV

I only include a discussion of the DV standard to show that the decoder model can also accommodate lower complexity decoder algorithms. The DV format has not been developed for efficient compression in the sense of a high quality-per-byte ratio, but is optimized for easy per-frame access to simplify video editing. As such, it does not exploit any temporal redundancy, but draws its compression effect from a lossy exploitation of spatial redundancy.

1. The DV bitstream is designed for random access. It does not consider network streaming and has low overhead.

2. With respect to frames, the DV decoder is stateless, so no reference frame space needs to be prepared.

3. The DV format uses a JPEG-like compression for the individual frames. A variable length coding is used on the coefficients.

4. The inverse scan is a typical zig-zag scan.

5. There is no coefficient prediction.

6. Quantization is done by scaling the coeffi-cients.

7. The block transform is the IDCT.

8. There is no spatial prediction.

9. There is no temporal prediction.

10. There is no post processing.
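The zig-zag scan from step 4 (used, with variations, by all algorithms above) can be generated rather than stored: walk the anti-diagonals of the block, alternating the traversal direction, so that coefficients are ordered from low to high frequencies. A small sketch:

```python
def zigzag_order(n=8):
    # Coordinates of the classic zig-zag scan for an n x n block.
    order = []
    for d in range(2 * n - 1):  # d indexes the anti-diagonals
        cells = [(r, d - r) for r in range(max(0, d - n + 1), min(d, n - 1) + 1)]
        if d % 2 == 0:
            cells.reverse()  # even diagonals run bottom-left to top-right
        order.extend(cells)
    return order

def inverse_scan(coefficients, n=8):
    # Place a linear coefficient list back into its 2D block positions.
    block = [[0] * n for _ in range(n)]
    for value, (r, c) in zip(coefficients, zigzag_order(n)):
        block[r][c] = value
    return block
```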

3.3 Metrics for Decoding Time

Using the XviD [21] implementation of the MPEG-4 part 2 algorithm and a selection of example videos ranging from highly compressed low resolution video up to high definition content, I measured the per-frame execution time spent inside the various function blocks on a 400 MHz Pentium II machine. The example videos listed in Table 1 naturally span a variety of stream parameters, which is necessary to evaluate their influence on the decoding time. If, for example, all videos had been of the same resolution, the influence of the pixel count on decoding time could not be seen.

The profiling facility included in the XviD implementation has been a good starting point for a logical splitting of the decoder to wrap the right functions for time measurement. It is interesting to see in Figure 6 how much time is spent in each function block for XviD on average. (This average has been taken over all sample videos from Table 1.) I will now go through the various function blocks of the decoder model and explain how their decoding time can be estimated using values derivable from the stream.

3.3.1 Bitstream Parsing

Because the bitstream is not always byte-aligned and parts of it may be compressed, the parsing takes time. In my XviD measurements, the bitstream parsing also encompasses all the parsing inside the macroblocks and the glue code binding the function blocks together. Because most of this should take place at the macroblock level, we can expect the execution time for this step to be closely related to the number of pixels per frame, because the number of macroblocks to parse correlates well with the number of pixels. This works well for I-frames, as can be seen in Figure 7. (The samples appear in clusters because the pixel count is constant throughout each video.)

video name            duration  resolution  size    profile  properties
◦ Voyager             1:20 min   352×288    13 MB   SP       low in movement, noisy image (TV recording)
� IBM commercial      1:50 min   352×240    2.8 MB  SP       low in movement, many monochrome shapes
� IBM commercial      1:50 min   352×240    508 KB  ASP      see previous, highly compressed
+ movie trailer       1:13 min   720×576    63 MB   SP       dynamic, fast cuts, DVD quality material
+ movie trailer       1:13 min   720×576    8.6 MB  ASP      dynamic, fast cuts
× Amazon documentary  1:37 min  1440×1080   97 MB   ASP      high quality HD content, detailed images

profile names: SP = simple visual profile, ASP = advanced simple profile
The first column shows the symbol with which this stream is represented in the diagrams throughout this section.

Table 1: The sample videos used in the decoding time analysis.

16.3% bitstream parsing
10.1% decoder preparation
 9.9% decompression & inverse scan
 3.5% inverse quantization
 1.6% coefficient prediction
 6.3% inverse block transform
50.7% temporal prediction
 1.6% merging

Figure 6: Average execution time profile for MPEG-4 part 2.

However, the P-, B- and S-frames show a different behavior. I chose the B-frames to demonstrate this in Figure 8; the P- and S-frames behave similarly. Because these frame types allow a greater variety of coding options for each macroblock than the I-frames, which naturally consist entirely of intracoded macroblocks, the amount of bytes per frame has an influence, too. As can be seen in Figure 9, the result improves when taking this into account by calculating a linear combination of pixel count and bytes per frame that matches the execution time best. This idea also works for P- and S-frames.

I conclude that the pixel count as well as the amount of bytes per frame need to be included in the metrics to predict decoding time.
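Such a best-matching linear combination can be obtained with ordinary least squares. The following sketch (my own illustration, not necessarily how the prediction implementation solves it) fits coefficients via the normal equations using Gaussian elimination; each sample row holds the metric values for one frame, e.g. [pixel_count, byte_count, 1] with a trailing 1 for a constant offset.

```python
def fit_least_squares(samples, times):
    # Fit coefficients x minimizing sum((row . x - t)^2) by solving the
    # normal equations (A^T A) x = A^T t with Gaussian elimination.
    n = len(samples[0])
    ata = [[sum(r[i] * r[j] for r in samples) for j in range(n)] for i in range(n)]
    atb = [sum(r[i] * t for r, t in zip(samples, times)) for i in range(n)]
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (atb[r] - sum(ata[r][c] * x[c] for c in range(r + 1, n))) / ata[r][r]
    return x
```

The predicted decoding time for a new frame is then the dot product of its metric row with the fitted coefficients.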

3.3.2 Decoder Preparation

The MPEG-4 part 2 algorithm extends the edges of reference frames before they are used for temporal prediction. The XviD implementation does this once for the entire reference frame before the macroblocks are evaluated. Because the length of the image edges correlates with the square root of the pixel count, I would expect a square root match between the pixel count and the execution time for this step. Figure 10 shows, exemplarily for P-frames, that this assumption is correct.

The B- and S-frames behave the same, and for I-frames, there are no reference frames to be modified. In earlier tests I experienced stray samples on the abscissae with S-frames, but this turned out to be a problem in XviD: The MPEG-4 part 2 standard allows entire frames to be not coded, so they use up almost no CPU time. This could be regarded as a separate frame type (N-frame), which is what I am doing in my implementation of the prediction. XviD was erroneously reporting these frames as S-frames.

Figure 7: Estimating the bitstream parsing time for I-frames.

Figure 8: Estimating the bitstream parsing time for B-frames (1).

3.3.3 Decompression and Inverse Scan

The decompression should be the dominant task here, because the inverse scan is as easy as selecting a scan table and then using one table lookup for every decompressed coefficient. The

◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊

1 800 000 2 700 000

18

24

0

6

12

Pixel count + 25 * Byte count

Mill

ion

CPU

cyc

les

900 000 3 600 000

Figure 9: Estimating the bitstream parsing timefor B-frames (2).

[Scatter plot: x-axis "Pixel count" (0 to 1 600 000), y-axis "Million CPU cycles" (0 to 2)]

Figure 10: Estimating the decoder preparation time for P-frames.

decompression, however, is a variable-length decompression, so the syntax elements are not byte-aligned. The execution time correlates well with the per-frame length of the bitstream across all frame types, because most of the bitstream is spent on storing the macroblocks; header overhead is minimal. Figure 11 shows the match.
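The linear correlation described above (decompression time against per-frame bitstream length) can be exploited with a simple least-squares fit. The following is a minimal sketch of that idea; the training pairs and function names are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: predicting decompression time from the per-frame byte count
# via an ordinary least-squares line fit. The sample data is made up
# for illustration (byte count, measured million CPU cycles).

def fit_linear(xs, ys):
    """Ordinary least-squares fit y ≈ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx        # slope: cycles per byte
    b = my - a * mx      # intercept: fixed per-frame overhead
    return a, b

# Hypothetical calibration frames
frames = [(80_000, 13.1), (160_000, 26.0), (240_000, 39.2), (320_000, 52.1)]
a, b = fit_linear([f[0] for f in frames], [f[1] for f in frames])

def predict_cycles(byte_count):
    """Predicted decompression time in million CPU cycles."""
    return a * byte_count + b
```

Because the syntax elements are variable-length coded, no structural feature finer than the byte count is cheaply available before decoding, which is what makes this single-predictor fit attractive.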

3.3.4 Inverse Quantization

The inverse quantization is done once for everymacroblock, so it correlates well with the to-



3 Decoder Analysis

[Scatter plot: x-axis "Byte count" (0 to 320 000), y-axis "Million CPU cycles" (0 to 52)]

Figure 11: Estimating the decompression time for I-, P-, B- and S-frames.

◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊

9 250 18 500 27 750 37 000

7.5

10

0

Macroblock count

Mill

ion

CPU

cyc

les

5

2.5

Figure 12: Estimating the inverse quantizationtime for I-, P-, B- and S-frames.

tal macroblock count. The macroblock count should not be mistaken for a scaled pixel count. Because of not coded macroblocks, the pixel count only yields an upper bound for the macroblock count. The diagram for all frame types can be seen in Figure 12.
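The linear relationship underlying Figures 12 through 14 can be sketched as an ordinary least-squares fit of cycles against the chosen metric. The measurement values below are hypothetical and only illustrate the fitting step, not the actual data behind the figures.

```python
import numpy as np

# Hypothetical per-frame measurements: total macroblock count and the CPU
# cycles spent in the inverse quantization step (illustrative values only).
macroblocks = np.array([8000, 12000, 18500, 27750, 33000, 37000])
cycles = np.array([2.1e6, 3.0e6, 4.8e6, 7.1e6, 8.4e6, 9.5e6])

# Least-squares fit of a line: cycles ~ a * macroblocks + b.
a, b = np.polyfit(macroblocks, cycles, 1)

def predict_iq_cycles(mb_count):
    """Predict inverse quantization time (in cycles) from the macroblock count."""
    return a * mb_count + b

print(predict_iq_cycles(20000))
```

The same one-metric linear model is reused for the other decoder steps; only the metric (total versus intracoded macroblock count) changes.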

3.3.5 Coefficient Prediction

The coefficient prediction is only done for intracoded macroblocks, so the execution time of this step correlates with the count of this macroblock type. Intracoded macroblocks can occur within every frame type, but the estimation works equally well for all types as shown in Figure 13.

[Scatter plot omitted: million CPU cycles versus intracoded macroblock count.]

Figure 13: Estimating the coefficient prediction time for I-, P-, B- and S-frames.

3.3.6 Inverse Block Transform

As with the inverse quantization, the total macroblock count gives a good estimate, as Figure 14 shows. There is still deviation from the linear match, which might stem from either cache effects or different amounts of zero values in the macroblocks. Depending on the implementation, algorithms can be faster when more coefficients are zero. However, looking into that has the disadvantage of increased prediction overhead, and because the inverse block transform only accounts for 3.9% of the total decoding time, I decided not to count zero values separately.
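The decision not to count zero values can be backed by a quick bound: a step's share of the total decoding time caps the error it can contribute to the overall prediction. The 3.9% share comes from the text; everything else in this sketch is illustrative.

```python
# Share of the total decoding time attributed to each step. The inverse
# block transform share (3.9%) is taken from the measurements; the dict
# could be extended with the other steps' shares.
step_share = {"inverse_block_transform": 0.039}

def total_error_bound(step, relative_step_error):
    """Upper bound on the relative error a single step's misprediction
    contributes to the total decoding-time prediction."""
    return step_share[step] * relative_step_error

# Even a 20% misprediction of the inverse block transform shifts the
# total prediction by less than one percent.
print(total_error_bound("inverse_block_transform", 0.20))
```

This is why a cheap, slightly less accurate metric is acceptable for low-share steps, while the temporal prediction step below warrants more effort.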

3.3.7 Temporal Prediction

This step is the most time consuming. Unfortunately, its execution time is also the hardest to predict. Because motion compensation is only done for intercoded macroblocks, one might be tempted to derive the execution time from the count of intercoded macroblocks. Figure 15 shows that this fails.
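Stepping back, the per-step metrics discussed so far combine into a per-frame prediction as a sum of linear terms, one per decoder step. A minimal sketch of that scheme, with hypothetical coefficients in place of fitted ones:

```python
# One linear model per decoder step: (cycles-per-unit coefficient, metric name).
# The coefficients here are hypothetical placeholders, not fitted values.
step_models = {
    "inverse_quantization":    (256.0, "macroblocks"),
    "coefficient_prediction":  (610.0, "intra_macroblocks"),
    "inverse_block_transform": (430.0, "macroblocks"),
}

def predict_frame_cycles(metrics):
    """Sum the per-step linear estimates for one frame."""
    return sum(coeff * metrics[metric] for coeff, metric in step_models.values())

# Example frame: 1350 macroblocks total, 40 of them intracoded.
frame_metrics = {"macroblocks": 1350, "intra_macroblocks": 40}
print(predict_frame_cycles(frame_metrics))  # 950500.0
```

The temporal prediction step does not fit this pattern with a single macroblock count as its metric, which is exactly the difficulty the following discussion addresses.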



[Scatter plot omitted: million CPU cycles versus macroblock count.]

Figure 14: Estimating the inverse block transform time for I-, P-, B- and S-frames.

[Scatter plot omitted: intercoded macroblock count (0 to 32 000) vs. million CPU cycles (0 to 120).]

Figure 15: Estimating the temporal prediction time for P-frames (1).

The reason is that there are various different methods of motion compensation due to macroblock subblocking and different storage types for motion vectors. These options are independent of the macroblock type. Searching for a more adequate metric, I first tried to distinguish between all these methods and account for them individually, but this strategy is problematic, because there are too many combinations and covering them all with sample videos is difficult. But when looking at the bigger picture, I realized that motion compensation is basically just copying pixels from a reference image into the

[Scatter plot omitted: reference frame memory accesses (0 to 32 000 000) vs. million CPU cycles (0 to 120).]

Figure 16: Estimating the temporal prediction time for P-frames (2).

current frame. Because of half pixel or quarter pixel accuracy and the necessary interpolation and filtering, one pixel copy operation can range from 1 to 20 memory accesses. Therefore, the number of memory accesses into the reference frame should result in a far better prediction. Of course this number cannot be measured directly, but looking at the code for motion compensation, we can easily count the memory accesses. So depending on the lower bits of the motion vectors, which differentiate between full, half or quarter pixel references, I created a formula to calculate the number of memory accesses. For that, the motion vectors need to be decoded completely, which takes time and increases prediction overhead. However, because the temporal prediction accounts for a big portion of the overall decoding time, I think this step is necessary. I will show the effect on the overhead in Subsection 6.1.1. The promising results for P-frames can be seen in Figure 16. This works equally well for B-frames.
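The idea of deriving an access count from the subpixel bits of a motion vector can be sketched as follows. This is a simplified illustration with half-pel vectors only, and the per-case read counts are assumptions, not the thesis' actual formula:

```python
# Hypothetical sketch: counting reference-frame memory accesses for one
# macroblock from the subpixel bits of its motion vector.

def mc_memory_accesses(mv_x, mv_y, block_w=16, block_h=16):
    """Estimate reference-frame reads for motion-compensating one block.

    The lowest bit of each half-pel motion vector component tells whether
    interpolation is needed along that axis: a full-pel copy reads one
    sample per pixel, half-pel interpolation along one axis reads two,
    and along both axes four.
    """
    frac_x = mv_x & 1  # half-pel offset horizontally?
    frac_y = mv_y & 1  # half-pel offset vertically?
    reads_per_pixel = (1 + frac_x) * (1 + frac_y)  # 1, 2 or 4
    return block_w * block_h * reads_per_pixel

# Full-pel vector: one read per pixel of the 16x16 block.
assert mc_memory_accesses(4, -8) == 256
# Half-pel in both dimensions: four reads per pixel.
assert mc_memory_accesses(5, 3) == 1024
```

Summing this quantity over all intercoded macroblocks of a frame yields the metric plotted on the x-axis of Figure 16.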

It may be a bit surprising that memory accesses alone estimate the execution time so well, given that different interpolation and filtering is done for full, half and quarter pixel accesses. I assume that with today's processors, these additional operations are covered up by the memory accesses because of parallel execution.


[Scatter plot omitted: reference frame memory accesses + 5056 * GMC operations (0 to 32 000 000) vs. million CPU cycles (0 to 120).]

Figure 17: Estimating the temporal prediction time for S-frames.

S-frames, however, encompass global motion compensation (GMC), so a different approach is required here. Because I did not want to look into the more complicated warping algorithm, I just counted the number of macroblocks using global motion compensation and calculated a linear combination of the memory accesses from non-GMC macroblocks and the amount of GMC macroblocks to match the execution time. The good results shown in Figure 17 indicate that the execution time per GMC macroblock is fairly constant.
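Fitting such a per-GMC-macroblock cost can be sketched with a one-parameter least-squares fit. The data below is synthetic and the model is simplified (cycles ≈ memory accesses + c · GMC macroblocks); it only illustrates how a constant per-macroblock cost like the 5056 cycles in Figure 17 could be obtained:

```python
# Sketch (assumed data): fit the per-GMC-macroblock cost c in the model
#   cycles ~ memory_accesses + c * gmc_macroblocks
# by least squares on the residual cycles - memory_accesses.

def fit_gmc_cost(samples):
    """samples: list of (memory_accesses, gmc_macroblocks, measured_cycles)."""
    num = sum(g * (t - m) for m, g, t in samples)
    den = sum(g * g for m, g, t in samples)
    return num / den

# Synthetic frames generated with a true cost of 5056 cycles per GMC block:
data = [(10_000_000, n, 10_000_000 + 5056 * n) for n in (50, 120, 300)]
assert round(fit_gmc_cost(data)) == 5056
```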

3.3.8 Merging

The merging step combines the results of the temporal prediction with the decoded macroblocks, so a linear match of intra- and intercoded macroblock counts should estimate the execution time well enough. Because this step has only a small influence on the total decoding time, we can tolerate the deviations seen in Figure 18, which are possibly caused by cache effects or saturation artifacts. (The saturation of the 16 bit integers used throughout the decoding process to 8 bit integers in the final image also takes place here.)
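The saturation mentioned in parentheses is a simple clamp; a minimal sketch:

```python
# Minimal sketch of the saturation step: a 16-bit signed intermediate
# sample is clamped to the 0..255 range of the final 8-bit image.

def saturate_u8(value):
    """Clamp a 16-bit signed intermediate sample to an 8-bit pixel."""
    return max(0, min(255, value))

assert saturate_u8(-7) == 0
assert saturate_u8(300) == 255
assert saturate_u8(128) == 128
```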

××××

××

×××

×××××××

×

××××××

×

× ××

××××××

××××××

×

××

××

×××××

×××

××××××××

×

××

××

×

××

×

××

××××

×××××

×

×××× ×

×××××

×

××

×××××××

×

××

×××××××××

×

××

×××××××

×

××

×××××

×

×××××

×××

××

×

×

× ××

××

××

×××××× × ×

××

××

××

×

××

××

××

×

××

× ×

×

×××××××

×

×××××××

××

××

×××××

×

×××××××

×

××

×××××

×××

××××

×

×××

××××

×

×××

× ×××

××××

×

×××

××

×

×

×

××

××××

××× ××× ×××

××××

×

×××

××××

××

×

××

××××

× ××××××

××

×××

×

×××××

××××××

××××

×

×

×

××

×

××××

××

× ××××

×× ×

×

××××××

×× ××

×× ×

×

×××

××

××

×××××

× ××××

××

××

××××

×××××

×××

××

×

××

×××

×

×××

×

××××

×

× ××××

×××××

×

××

×××

××× ×

××××

×

×××××

× ××××

××

××

×××

×

××× ×××

××××

×

×××

×××

××

×

×××××

× ×××××

××××

×

××

×××

× ××

××

××××

×

××

××××××

××××

××××× ×

×××

×

××××××

×

×××

×

×

×

×

×

××

×

× ×

×

××

××

××

××××

×

××

×

×

×

× ×

×

××

×

×

×

× ×

×

×

×

××

×

××

××

××××

×

××

××

× ××

××

×

××

×

×

××

×

×

×

××

×

× ×

××

×

×

×

×

×

×

×

×

×

××

×××

×

× ×

×

×

×

×××

×

××

×

×

×

××

××

×

×

×× ×

×

× ×

×

×

××

×

××

×

××

×

×××

××

××

×

××

×

××

×

××

×

× ×

×

×× ××

×××××

×

×

×

××

××××××

×

×

××××

××

×

×××

×

××

××

×

××

×

××

××

×

××××

×

××

×

×

××

×

××

×

×

××

××

×××

×××

×

××

×

××××

×

××

×

××

×

××××× ××××

×

×

×

××

×

×××

×

××

×

××

×

×

×

××××× ×

×

××

×

×××

×

×

××

× ××××××

××

×

××××× ×××

×

×

×××

×

××

××

×××××××××××××××

××××××××××××× ××××

××

××××××

×××

×××××××××××××××××× ×× ×××××

× ×××××××

××××

××× ××××××××××

×××××× ××× ×××××

×××××× ×××××××××××××

××××××××××××××××××××××××××× ×××××××××××

××× ××××× ×

××××××××××××××××××

×××××××××

×××

×

×× ×

×××××××××××××

×××××××××

××××××××××××××××××

×××××××

××××

××××××××××××××××××××××××

××××××

××××××× ×

×××××××××××××××××××××× ×××××××

××××××××××××××

××× ××××××××××××××

×××× ××× ×××××××××

×××××××

××

××

××××

××××××××

××

×

×××

×××

×××

×

×××

××××

×××××××

×××

×××

×××

× ××××

×××

×××

××××××

××××

×××

×

×××

×××××

×××

××××××

×××××

××

×××××

××××××

×

××

×××

×××××

×

×××

×××××

×

××

×××

×××

×

××

×××××

××

×××

×××××

×××

××

×××

××

××

×

××

×××

×

××

×××

××××

×××××

××

×××

××××

××××××

×

××××

×××

×××××××

××

×××××

××××××

××××××

××

××××

×

××××

××

×××

×××××

×××××××

×××

×××

××

××

×

××

×××

××

××× ×××

××××××

××

×××××××

××××××

×××

××××××××××××

××××××××

××××××××××××××

×××××××××××××

×××

××××××××××××××

××××××××××××××

××××××××××××××

×××××××××

×××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××

××××××××××××××××××

×××××××××××××××××××

×××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××××

×××××××

××××

×

××

××

×

×

×

×

× ×××

× ×××××××

×××

×××××××

× ××××

××××

××××

×××××

××××

××××

××

××××

× ×××

××××

×××××

×

××

×

×× ×××

××××

× ×××××

××× ×××

×

×× ××

×

× ×

×

×××××× ×

×× ×

×

× ××× ××

×××××××

×××××

×××

×

××

×

××××××

××

×

×××××

×

×× ×××

×

××

×

×

×

××××

×

×××××

×××

×× ×

××

××

×

××××××

×

×

×

×××

×××× ×

×

××××× ××

××

+++

+++ + ++++++

+++++++++++++++++++++++++++++++++

+ +++++++++++++ ++ ++++++++++ +++++

++ +++

+

+++++++++++++++++++++++++++++++++++++++++++++ +++++++++++ +++++++++++ ++++++++++++ +++++++++++ ++++++++++++ +++++++++++++++++++++++ ++++++++++++ ++++++++++++++++++++++++++++++++++++++++++++++++++ +++++++++++++++++++++++++++++++++++++++++++ ++++++++++++ +++++++++++++++++++++++ ++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++ +++++++++++++++++++++++++++++++ ++++++++++

+++ ++ ++++++++++++ +++ ++++++++ ++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +++++++++++ ++++++++++++ +++++++++++ ++++++++++++ +++++++++++ +++++++++++++++++++++++ ++++++++++++ ++++++++++++ ++++++++++++ ++++++++++++ +++++++++++ ++ +++++++++ ++++++++++++ ++++++++++++ +++++++++++++++++++++++ +++++++++++++++++++++++ +++++++++++++++++++++++ +++++++++++++++++++++++++++++++++++ +++++++++++++++++++++++++++++++++++ +++++++++++ +++++++++++ ++++++++++++ ++ +++++++++ ++++++++++++ ++++++++++++ ++++++++++++ +++++++++++++++++++++++ +++++++++++++++++++++++ +++++++++ ++++++++++++++ +++++++++++ +++++++++++++++++++++++ +++++++++++ ++++++++++++ +++++++++++++++++++++++ +++++++++++++++++++++++ +++++++++++ +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +++++++++++++++++++++++ ++++++++++++ ++ +++++++++ ++++++++++++++++++++++++++++++++++++++++++++++++++++ +++++++++++++++++++++++++++++ ++++++++++++ +++

+ +++++++++++++++++++++++++++++++++++++++++++++++++++ +++++++++++++++++++

+++++++++++++++++++++

+++++++

++++++

+++++ +++++++++++++++++++++++ ++++++++++++ ++ +++++++++ ++++++++++++ +++++++++++ ++++++++++++ ++++++++++++++++++

+++

+

+

+++++++++++ +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

++++

+++++

++

++

+++++++++++++++++++++

++++++++++++++++++++ +++++++

+++++++++++++ ++++++++++++++++++++++++++++++

++++++++++++ ++++++++

++++++++++++++++++++++++++++++++++++++++++++++

+++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++

++++++++

+++++++++

+

+++++++++++++++

+++++

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++

+++++++++++++++++

++++

++++++ +++

++

++++++++++++++

++ +

++

++++++++++ +++++++++

++++ +

+++++ +++++++++++++++++++++

++++++++++++++++++ ++++++

+++ +++++++++++++

++++++++++++++++ +++++

+++++++++++++++++++++++++++++++++++

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++ ++++++++++++++++++++++++++++++++++++++++++++++++ +++++++++++++++

+++

+

+++++++++++++++++++ΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟ

ΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟΟ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊

◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ ◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊◊ 

(figure: merging time in million CPU cycles, plotted over the metric 1.39 * intracoded macroblocks + intercoded macroblocks)

Figure 18: Estimating the merging time for P-, B- and S-frames.

3.3.9 Summary

I have now established which metrics are useful to get reasonable estimations for the execution time of the various stages. Because the time required for the entire decoding process is the sum of the separate execution times, these metrics will also allow predictions for the frame decoding times. The metrics are:

• pixel count

• square root of pixel count

• byte count

• intracoded macroblock count

• intercoded macroblock count

• motion compensation reference frame memory accesses

• global motion compensation macroblock count

With similar considerations as presented for MPEG-4 part 2, I also selected the necessary metrics for MPEG-1/2, which I will name here:

• pixel count


• byte count

• intracoded macroblock count

• intercoded macroblocks without IDCT shortcut

• intercoded macroblocks with IDCT shortcut

• motion compensation reference frame memory accesses

The only notable difference from the MPEG-4 part 2 metrics is the differentiation of the intercoded macroblocks. The libmpeg2 implementation of the IDCT uses a shortcut for macroblocks where all AC coefficients are zero. This situation is easy to detect during macroblock decompression, so the overhead introduced by this step is negligible.
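The shortcut test itself is cheap. A hypothetical sketch of the check (not the libmpeg2 code), assuming the 64 coefficients of a macroblock's 8×8 block are at hand with the DC value at index 0:

```c
#include <stdbool.h>

/* Classify an 8x8 coefficient block: if all AC coefficients (every
 * coefficient except the DC value at index 0) are zero, the IDCT reduces
 * to filling the block with a single value, which a decoder can exploit
 * as a shortcut. Hypothetical helper, not the actual libmpeg2 API. */
static bool idct_shortcut_applies(const short coeffs[64])
{
    for (int i = 1; i < 64; i++)
        if (coeffs[i] != 0)
            return false;
    return true;
}
```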

How to extract the metrics from the stream will be covered in Section 5. This leaves the problem of automatically determining the factors to apply to the metrics to get a decoding time prediction. This will be discussed in the following section.

4 Numerical Background

I have now extracted a set of metric values for each frame of the video. In a learning stage, of which I will present details in Section 5, I will therefore receive a metric vector and the measured frame decoding time for each frame. Accumulating all the metric vectors as rows of a metric matrix M and collecting the frame decoding times in a column vector t, I now want to derive a column vector of coefficients x, which will, given any metric row vector m, yield a predicted frame decoding time m · x.

I will first summarize the used variables:

m    the number of metric vectors accumulated during learning

n    the number of metrics extracted per frame

M    m × n matrix of accumulated metric vectors

m_i  the i-th row of M

t    m-dimensional column vector holding the frame decoding times

t_i  the i-th component of t

x    n-dimensional column vector holding the prediction coefficients
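With these variables, the prediction step itself reduces to the inner product m · x. A minimal sketch in C (the function name is made up, not part of predict.h):

```c
/* Predicted decoding time for one frame: the inner product of the
 * frame's metric vector m (n entries) with the coefficient vector x.
 * Hypothetical sketch of the prediction step, not the predict.h API. */
static double predict_time(const double *m, const double *x, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += m[i] * x[i];
    return sum;
}
```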

Because the prediction coefficients x must be derived from M and t alone, I model the situation as a linear least square problem (LLSP):

‖Mx − t‖²_e → min_x

That means the accumulated error between the prediction Mx and the actual frame decoding times t must be minimal. The error is expressed by the square of the Euclidean norm, which is the sum of the squared errors per row, as we can easily see:

‖Mx − t‖²_e = (Mx − t)ᵀ (Mx − t) = Σ_{i=1..m} (m_i x − t_i)²

In general, the error will not disappear, because this would be equivalent to solving the linear system (M, t):

‖Mx − t‖²_e = 0  iff  Mx = t

But solving this equation is not possible in general, because we are examining a large number of frames, thus m > n. Therefore the linear equation Mx = t is overdetermined and a solution x need not exist.

For the original linear least square problem, two common solving methods exist: the normal equation and the QR decomposition [24, 25].


The normal equation derives x by solving the linear equation MᵀMx = Mᵀt. I will not explain why this solves the original problem, because I am not going to pursue this solution. The normal equations have benefits for large m, because the matrix MᵀM is only of size n × n, and if M can be assumed to have full rank, there are fast solvers like Cholesky factorization [26], but normal equations have the disadvantage of being numerically unstable [27]. In comparison, the QR decomposition is less sensitive to badly conditioned matrices M, and because the calculation of the coefficients x is not time critical, we value a stable result higher than calculation speed.

However, we can see from the normal equation that, given a full rank matrix M, the matrix MᵀM is symmetric and positive definite, so there is exactly one solution x. The full rank property will be dealt with later, and the QR decomposition algorithm we are going to use detects matrices with rank deficiencies, as we will see in Subsection 4.2.

4.1 QR Decomposition

The solution of the linear least square problem ‖Mx − t‖²_e → min_x discussed previously is calculated by a QR decomposition of M. This transformation creates two matrices Q and R, for which M = QR holds, whereby the matrices have these additional properties:

• Q is an m × m real matrix,

• R is an m × n real matrix,

• Q is orthogonal, so Q⁻¹ = Qᵀ,

• R is in the form

  R = ( R̃ )  } n
      ( 0 )  } m − n

  with R̃ being a real n × n upper triangular matrix; the remaining rows of R are filled with zeros.

• If M has full rank, R̃ is regular, because the diagonal elements r_ii are not zero.

Using these properties of Q and R, we can reformulate the original problem, writing (a; b) for the vertical stacking of a and b:

‖Mx − t‖²_e = ‖QRx − t‖²_e
            = ‖QRx − QQ⁻¹t‖²_e
            = ‖QRx − QQᵀt‖²_e
            = ‖Q(Rx − Qᵀt)‖²_e
            = (Q(Rx − Qᵀt))ᵀ (Q(Rx − Qᵀt))
            = (Rx − Qᵀt)ᵀ QᵀQ (Rx − Qᵀt)
            = (Rx − Qᵀt)ᵀ Q⁻¹Q (Rx − Qᵀt)
            = (Rx − Qᵀt)ᵀ (Rx − Qᵀt)
            = ‖Rx − Qᵀt‖²_e
            = ‖(R̃; 0) x − (c̃; c_R)‖²_e
            = ‖(R̃x; 0) − (c̃; c_R)‖²_e
            = ‖(R̃x − c̃; −c_R)‖²_e
            = ‖R̃x − c̃‖²_e + ‖−c_R‖²_e
            = ‖R̃x − c̃‖²_e + ‖c_R‖²_e

Qᵀt has been divided similarly to R:

c := Qᵀt = ( c̃ )  } n
           ( c_R ) } m − n

Because only ‖R̃x − c̃‖²_e depends on x and squares can never be negative, we can reach our solution by

‖Mx − t‖²_e = ‖R̃x − c̃‖²_e + ‖c_R‖²_e → min_x

iff

R̃x − c̃ = 0

iff

R̃x = c̃

This equation is easy to solve, because R̃ is upper triangular, and it always gives a result if M has full rank, because then R̃ is regular.
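Solving the triangular system is plain back substitution. A minimal sketch in C (row-major storage assumed; names hypothetical):

```c
/* Solve the staggered (upper triangular) system R x = c by back
 * substitution. r is the n x n upper triangular matrix stored row-major.
 * Assumes all diagonal elements are nonzero, i.e. full rank. Sketch only. */
static void back_substitute(const double *r, const double *c, double *x, int n)
{
    for (int i = n - 1; i >= 0; i--) {
        double sum = c[i];
        for (int j = i + 1; j < n; j++)
            sum -= r[i * n + j] * x[j];   /* subtract already-solved terms */
        x[i] = sum / r[i * n + i];
    }
}
```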

The remaining error can be expressed as

min_x ‖Mx − t‖²_e = ‖c_R‖²_e

The calculation of R and c (and thus of R̃ and c̃) can be unified into one step:

R = QᵀM
c = Qᵀt

iff

(R | c) = Qᵀ (M | t)

So we can construct the algorithm to reach the final result:

1. Add the vector t as an additional column to the right of matrix M.

2. Use a QR decomposition algorithm to decompose (M | t), resulting in (R | c), which is of dimension m × (n + 1).

3. Take the upper n rows of R and c, resulting in R̃ and c̃, which have the necessary properties, because the upper n + 1 rows of (R | c) are upper triangular, so the upper n rows of R are upper triangular too.

4. Solve the staggered system R̃x = c̃ to gain the solution x.

The following interesting facts should be noted:

• As a result of the QR decomposition, the lower m − (n + 1) rows of (R | c) are filled with zeros, which makes every element of c but the first n + 1 disappear. Because the lower m − n elements of c make up c_R, every element in c_R but the first is zero. So the remaining error easily calculates as

  min_x ‖Mx − t‖²_e = ‖c_R‖²_e = c_R1² = c_{n+1}²   (1)

  with c_R1 being the first element of c_R and c_{n+1} being element n + 1 of c. This error is called the residual sum of squares.

• The subdiagonal elements and the zero-filled rows of (R | c) do not need to be touched again, so implementations can be optimized by not actually zero-filling them.

• The value of Q is never used explicitly, only R is needed. So the QR decomposition algorithm can benefit from that by not actually calculating Q.

4.2 Householder’s Algorithm

Several other algorithms for the QR decomposition exist, like the Gram-Schmidt [28] or Givens [29] algorithms, but Householder [28] is the most commonly used because of its numeric stability and simplicity. It is also suitable for the given task, because it directly calculates R, but not Q. The principle of Householder's algorithm is to iteratively zero the subdiagonal elements of the initial matrix A^(1). This transformation is done on a per-column basis. In Householder iteration i, the columns of A^(i) are treated in this fashion:

1. Given a matrix A^(i) of dimension k_i × l_i (with k_i > l_i and l_i > 0), we divide the first column a_1^(i) of A^(i) like this:

   a_1^(i) = (p_i; q^(i))

   with p_i being the first element and q^(i) holding the remaining k_i − 1 elements, and we construct an orthogonal matrix Q^(i), so that

   Q^(i) a_1^(i) = (σ_i; 0)


2. Householder's solution is:

   σ_i = −sgn(p_i) · √(p_i² + ‖q^(i)‖²_e) = −sgn(p_i) · √(p_i² + q^(i)ᵀ q^(i))

   with sgn(p) = 1 for p ≥ 0 and sgn(p) = −1 for p < 0,

   v^(i) = (p_i − σ_i; q^(i))

   Q^(i) = I + (1 / (σ_i (p_i − σ_i))) · v^(i) v^(i)ᵀ

   By showing Q^(i) Q^(i)ᵀ = I, Q^(i) can be proven orthogonal. (I is an identity matrix of a fitting size.) But because Q^(i) is not needed explicitly to proceed, it does not have to be calculated.

3. The vector Q^(i) a_1^(i) = (σ_i; 0) replaces the first column of A^(i).

4. The remaining columns a_j^(i) (j = 2, …, l_i) of A^(i) are each replaced with

   Q^(i) a_j^(i) = a_j^(i) + (v^(i)ᵀ a_j^(i) / (σ_i (p_i − σ_i))) · v^(i)

5. Take everything but the first row and the first column of A^(i) and call this A^(i+1). Set k_{i+1} = k_i − 1 and l_{i+1} = l_i − 1. If l_{i+1} > 0 (that is: A^(i+1) has at least one column), repeat at step 1, otherwise the algorithm has finished.

The final result R is gained by unnesting the A^(i), as if the transformation of A^(i+1) had been done inside A^(i):

R = ( σ_1  ā_2^(1)  ā_3^(1)  ā_4^(1)  ···  ā_{l_1}^(1) )
    ( 0    σ_2      ā_2^(2)  ā_3^(2)  ···  ā_{l_2}^(2) )
    ( 0    0        σ_3      ā_2^(3)  ···  ā_{l_3}^(3) )
    ( 0    0        0        σ_4      ···  ā_{l_4}^(4) )
    ( ⋮    ⋮        ⋮        ⋮        ⋱    ⋮           )
    ( 0    0        0        0        ···  σ_{l_1}     )
    ( 0    0        0        0        ···  0           )
    ( ⋮    ⋮        ⋮        ⋮        ⋱    ⋮           )
    ( 0    0        0        0        ···  0           )

with ā_j^(i) being the first element of column vector a_j^(i).

I am not going to show how Q can be derived from the Q^(i) and why Q is orthogonal. This can be found in the literature [28].

It can be seen that a subproblem Ā of A := A^(1), created by deleting columns from the right-hand side of A, will have a solution R̄, which can be derived from the solution R to A by deleting the same amount of columns from the right-hand side of R. The reason is that the individual iterations work from left to right through the matrix, so every column in the result is not influenced by columns further to the right. Therefore, an entire Householder QR decomposition will solve not only the problem A, but also all subproblems Ā. As we have seen in Equation 1, the residual sum of squares of an m × n linear least square problem is the square of the matrix element r_{n,n} in R. This value can be found as the final element in the σ-diagonal. The same applies to the subproblems, so all σ_i are residual sums of squares of their related subproblems. Whenever any of these values becomes zero, we know there is a subproblem inside A with a residual error of zero. This means there is a set of coefficients with which one column can be described exactly as a linear combination of other columns, so there is a linear dependency in A and A does not have full rank. Because every combination of columns is checked during an entire QR decomposition, every matrix without full rank will show up as a disappearing σ value. Using this information to drop one of the dependent columns, as will be discussed in Subsection 4.3, the algorithm can ensure its own prerequisites, because full rank is required for obtaining a solution.

The numerical stability of the Householder algorithm can be further improved by using column pivoting: at the beginning of each iteration i, select the column with the largest Euclidean norm and swap it with the leading column. Of course, this will give permutated results.
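The pivot selection can be sketched as follows, assuming the columns are stored as an array of column pointers (as the implementation in Section 5 does), so that the swap is a plain pointer swap; names are hypothetical:

```c
/* Column pivoting for Householder iteration i: among columns i..ncols-1
 * (each stored as a pointer to an m-element array), find the one with the
 * largest Euclidean norm over the rows not yet reduced and swap it to the
 * front by exchanging pointers. Pass ncols excluding the target column t,
 * so t stays in place. Illustrative sketch only. */
static void pivot_columns(double **col, int m, int ncols, int i)
{
    int best = i;
    double best_norm = 0.0;
    for (int j = i; j < ncols; j++) {
        double norm = 0.0;
        for (int k = i; k < m; k++)
            norm += col[j][k] * col[j][k];  /* squared norm suffices */
        if (norm > best_norm) {
            best_norm = norm;
            best = j;
        }
    }
    double *tmp = col[i];
    col[i] = col[best];
    col[best] = tmp;
}
```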

For our initial problem, the Householder algorithm starts with A^(1) = (M | t) and k_1 = m, l_1 = n + 1.

4.3 Refining the Result by ColumnDropping

To ease implementations of prediction for new decoder algorithms, we want to simplify the process of metrics selection for the decoding time estimation. In general, it should be possible to feed the LLSP solver with sensible metrics, and it should figure out by itself which ones to use and which ones to drop. Of course, the best result for the linear least square problem is always achieved by using as many metrics as possible, but one of the design goals was to make the results transferable to other videos, which might not always work when using metrics too greedily.

I will first examine why results might not be transferable. The primary reason for this are similarities of columns inside the metric matrix with linear combinations of other columns. The special case of this situation is an actual linear dependency. This will lead to ambiguities in the resulting coefficients, such that we can increase certain coefficients and compensate by decreasing others with little or no influence on the prediction results. The barebone LLSP solver will always search for the optimum, which might have such unwanted properties as negative coefficients or coefficients being otherwise way off.³ Those coefficients will of course still obtain good results for the video they have been calculated from, but the results for other videos may be completely wrong. The goals to reduce such effects must therefore be:

• Eliminate any negative coefficients, because they clearly indicate something going wrong. Our decoder model is a chain of function blocks with varying execution times summing up to a total execution time. But no such function block can have a negative execution time.

• Eliminate any further linear dependencies or situations close to a linear dependency that make the coefficients unstable.

The strategy we are going to use to make the result better transferable is iteratively dropping columns until all coefficients are positive and inter-column dependencies have been reduced. This leads to a reduction in the columns while still keeping enough columns to maintain the quality of the prediction.

As we have seen in Equation 1, the remaining error, called the residual sum of squares, for an n-column matrix is the square of the value in the n'th column of the n'th row after the QR decomposition. This value indicates the quality of the prediction: the smaller, the better. If we have to drop columns for transferability, we want to do so without too much degradation of the quality of the result. Therefore, we iteratively drop columns and then choose the one that best fits our two goals, but results in the smallest increase of this error indicator.

³ Example:

  x₁ + x₂ = 2
  x₁ + 1.0001 x₂ = 2.0001

leads to x₁ = x₂ = 1, but if the right-hand side of the second equation had been 1.9999 due to a measurement error, the result suddenly becomes x₁ = 3, x₂ = −1, because of the badly conditioned system, which almost has a linear dependency in it.
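The footnote's numbers can be checked mechanically; a minimal sketch solving the 2×2 system by Cramer's rule (the helper name is made up):

```c
#include <math.h>

/* Solve a 2x2 linear system by Cramer's rule, to reproduce the badly
 * conditioned example from the footnote. Illustrative sketch only;
 * Cramer's rule is itself a poor choice for ill-conditioned systems. */
static void solve2x2(double a11, double a12, double a21, double a22,
                     double b1, double b2, double *x1, double *x2)
{
    double det = a11 * a22 - a12 * a21;   /* tiny det = near dependency */
    *x1 = (b1 * a22 - a12 * b2) / det;
    *x2 = (a11 * b2 - b1 * a21) / det;
}
```

With the original right-hand side the solution is (1, 1); perturbing 2.0001 to 1.9999 flips it to (3, −1), as claimed.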


A linear dependency, or a situation close to it, can also be detected with this indicator: if we drop a column and there is only a minor increase in the residual sum of squares, the dropped column had little to no influence on the result, so the column can be approximated as a linear combination of the others:

Given columns c₁ to c_{n+1} and an error vector e with ‖e‖²_e = eᵀe = ‖c_R‖²_e, we have this solution to the LLSP:

x₁ c₁ + x₂ c₂ + ··· + x_i c_i + ··· + x_n c_n = c_{n+1} + e   (2)

If we drop column c_n, we get a new solution by QR decomposing the remaining columns:

x′₁ c₁ + x′₂ c₂ + ··· + x′_i c_i + ··· + x′_{n−1} c_{n−1} = c_{n+1} + e′   (3)

By subtracting Equation 2 from Equation 3, we can express c_n:

(x′₁ − x₁) c₁ + ··· + (x′_{n−1} − x_{n−1}) c_{n−1} = x_n c_n + (e′ − e)

((x′₁ − x₁)/x_n) c₁ + ··· + ((x′_{n−1} − x_{n−1})/x_n) c_{n−1} + (1/x_n)(e − e′) = c_n

If the residual sum of squares is small and changes only marginally, ‖e‖²_e ≈ ‖e′‖²_e holds. Then

‖e − e′‖ = ‖e + (−e′)‖ ≤ ‖e‖ + ‖−e′‖ = ‖e‖ + ‖e′‖ ≈ 2 ‖e‖

Thus, if ‖e‖²_e has been sufficiently small, ‖(1/x_n)(e − e′)‖²_e is small and the column c_n can be approximated using the other columns, resulting in a badly conditioned system.

I propose this algorithm to eliminate such situations:

1. Solve the linear least square problem.

2. Count the negative coefficients in the result.

3. Try dropping every column but the final one and compute the individual reduced linear least square problems. It is obvious that the final column must not be dropped, because it holds the measured decoding times, the target of our entire prediction. If no columns can be dropped, because the problem does not have enough columns left, we terminate and output the set of columns dropped so far.

4. For all solutions of the reduced problems, count the negative coefficients in the results.

5. Select a reduced problem based on the following criteria:

   a) Among the problems with fewer negative coefficients than the original unreduced one, select the one with the fewest negative coefficients. If several problems have an equally low amount of negative coefficients, select the one with the smallest increase on the error indicator. Go to step 6, if a problem has been chosen.

   b) Among the problems with a residual sum of squares less than 0.01 % (explanation follows) higher than the residual sum of the original unreduced problem, select the one with the lowest increase⁴ on the error. Go to step 6, if a problem has been chosen.

   c) If no problem has been selected yet, the algorithm terminates, outputting the cumulative set of columns that have been dropped earlier.

6. Remember the column that has been dropped to form the selected reduced problem. Restart the algorithm at step 1 with the selected reduced problem.

⁴ Although the residual sum of squares should never decrease when dropping a column, this can happen due to rounding errors. So a decrease should be treated like a negative and thus small increase here.

The 0.01 % tolerance used has been chosen empirically. Higher values will obviously make the algorithm more robust against numerical instabilities, but they might also result in columns being dropped that are significant to the result. Remember that the coefficients are only derived from a small set of learning videos. Columns with only little influence on this set may be important for predicting other videos' decoding times. I received good results with the 0.01 %. When the metrics are selected wisely, even lower values down to 0 should work. If one is unsure about the metrics, for example when implementing support for a new algorithm, it might help to increase this tolerance level until more refined metrics are available.

This algorithm always terminates by construction. It is also correct according to our goals, because it will only drop a column when this either reduces the amount of negative coefficients or results in only a minor increase of the residual sum of squares, in which case we have shown that a linear approximation for the dropped column existed. These situations are exactly what we wanted to avoid. Unfortunately, the algorithm cannot guarantee any properties of the resulting final solution, because we can construct a linear least square problem that always results in negative coefficients.⁵ However, such problems never appear within our application, because we can assume any sensible metrics selection to contain a number of columns that correlate positively with the measured decoding time.

⁵ All columns but the final one being filled with negative values, the final one being positive will do. No matter what columns you drop, the result will always contain negative coefficients.

5 Implementation

The implementation is threefold: at the beginning stands the basic solver for the linear least square problem with the additional column dropping. This solver is fed with the extracted metrics from the video stream. All that is then integrated into VERNER [19] with considerations on frame reordering. The entire programming interface together with additional documentation for each function can be found in the file predict.h.

5.1 Metrics Accumulation and LLSP Solver

The LLSP (linear least square problem) solver and predictor support two phases of operation:

• Learning mode, in which the solver accumulates metrics and frame decoding times

• Prediction mode, in which previously obtained coefficients are used to calculate predicted frame decoding times

During learning mode, the solver collects metric values in a matrix by calling llsp_accumulate() repeatedly. This function takes the extracted metrics and the measured frame decoding time and appends them as a new row to an internally kept matrix. This matrix is stored in the learning_t structure as a linked list of blocks of matrix rows. Because the final size of the matrix is unknown, new blocks will be allocated as needed. Newly allocated blocks will have a random size to hold between 1024 and 2048 rows. The randomization is done to avoid any possible aliasing of the allocation overhead and properties of the stream influencing decoding time. The allocation of a new block transiently changes timing and cache behavior, which might have unforeseeable and unwanted influences on the measured decoding times. If, for example, allocation always happens at an I-frame, this influence could unfavorably alter the calculated coefficients for I-frames. The randomization is hoped to level off such influences. If no allocation happens, the accumulation function is fast and has O(n) complexity with n being the number of metric values to collect, so video players can use it during regular playback.
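A minimal sketch of such a block list, assuming a layout similar to (but not identical with) the actual learning_t structure:

```c
#include <stdlib.h>

/* Linked list of matrix-row blocks with randomized capacity, as used for
 * metric accumulation: each block holds between 1024 and 2048 rows so
 * that allocations do not align with periodic stream features such as
 * I-frames. Names are illustrative, not the real learning_t layout. */
struct row_block {
    struct row_block *next;
    unsigned capacity;   /* number of rows this block can hold */
    unsigned used;       /* rows already filled */
    double *rows;        /* capacity * n_columns values */
};

static struct row_block *block_alloc(unsigned n_columns)
{
    struct row_block *b = malloc(sizeof *b);
    if (!b) return NULL;
    b->next = NULL;
    b->used = 0;
    b->capacity = 1024 + (unsigned)(rand() % 1025);  /* 1024..2048 rows */
    b->rows = malloc((size_t)b->capacity * n_columns * sizeof(double));
    if (!b->rows) { free(b); return NULL; }
    return b;
}
```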

If the data accumulation is finished, calling llsp_finalize() will run the actual solver. The solver implements column dropping using a bitfield: every bit of an unsigned long long variable represents one column of the matrix. If the bit is set, the column is used, otherwise dropped. Unfortunately, this limits the possible amount of metric columns to 64 on current architectures, but this amount is sufficient for the algorithms I examined.
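The bitfield bookkeeping might look like this; the macro names are illustrative, not those of the implementation:

```c
/* Column-dropping bitfield: bit i of the mask marks column i as in use.
 * Dropping a column clears its bit; the QR decomposition then simply
 * skips columns whose bit is unset. Illustrative helpers only. */
typedef unsigned long long column_mask;

#define ALL_COLUMNS(n)       (((n) >= 64) ? ~0ULL : ((1ULL << (n)) - 1))
#define COLUMN_IN_USE(m, i)  (((m) >> (i)) & 1ULL)
#define DROP_COLUMN(m, i)    ((m) & ~(1ULL << (i)))
```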

The solver iteratively calculates a QR decomposition, checks all remaining columns for drop candidates, and then decides whether a column should be dropped. If this applies, the next iteration starts, otherwise the loop exits. A column is dropped if it either results in negative coefficients or has an influence below 0.01 % on the prediction result. For enhanced numerical stability on rank-deficient matrices, the QR decomposition uses column pivoting during the iterations. The numerical background has been explained in Subsection 4.3. Because the QR decomposition is performed with Householder's algorithm, we require efficient column-wise access to the matrix. Therefore, the matrix is transposed when copied from the accumulated blocks, which allows for better cache use when walking the columns. It also makes the column swapping needed for the pivoting easier, because this can be achieved by simple pointer swapping. Of course, the implementation exploits that we do not need the matrix Q: Householder calculates only R directly; Q would need to be derived from intermediate results by expensive matrix multiplications. We also do not fill the subdiagonal elements of R with zeros, because they are never touched again.

One Householder step as well as the norm calculation for the pivoting make use of a scalar product function. Because the scalar product can suffer from truncation errors when small contributions are added to an already large sum⁶, this function uses the long double format for additional accuracy.
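A scalar product along these lines, with the accumulation widened to long double (a sketch, not the actual implementation):

```c
/* Scalar product with a long double accumulator: individual products are
 * computed from double inputs, but summed with extended precision to
 * reduce the truncation that occurs when small terms are added to an
 * already large sum. Illustrative sketch. */
static double scalar_product(const double *a, const double *b, int n)
{
    long double sum = 0.0L;
    for (int i = 0; i < n; i++)
        sum += (long double)a[i] * b[i];
    return (double)sum;
}
```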

The entire finalization has a complexity of O(m · n⁴), with m being the number of accumulated rows and n being the number of metrics. Every Householder step is O(m · n) and there are O(n) steps needed for an entire QR decomposition. Each column is then checked for dropping, which accounts for O(n) QR decompositions per dropped column. At most n columns can be dropped, resulting in a total of O(m · n) · O(n) · O(n) · O(n) = O(m · n⁴).

This may look scary, but n is typically fixed and small compared to m being unbound: while n is in the magnitude of 10, 40 seconds of video at 25 frames per second already make up for m = 1000. Given that the video length has only linear influence on execution time and that speed is not critical in the final calculations, the algorithm is well suited for our needs.

The prediction mode can then calculate predicted frame decoding times based on the determined coefficients. The function llsp_predict() provides such an estimate, given the extracted metrics. Additional functions allow loading and storing the resulting prediction coefficients, so that the learning stage does not need to be repeated all the time. The file format of the coefficient storage is simple and optimized for easy access: one file contains multiple blocks of coefficients, each of which is able to hold the maximum number of coefficients supported. The blocks can be accessed by using the identification number, which every LLSP solver gets assigned at initialization, as an index. The individual coefficients are just dumped double values. Because the format is not too sophisticated, this should be regarded as a demonstrational feature only. A final file format should contain versioning features, and the values should be stored in an architecture-independent, endian-safe way.

⁶ Example: with standard IEEE-754 64-bit floating point (usually C's double type), the condition (10¹⁰ + 10⁻¹⁰) = 10¹⁰ holds. The 10⁻¹⁰ is truncated.

The loading, storing, finalizing, and predicting is nondestructive, so it is possible to accumulate metrics, finalize and store the determined coefficients, and then accumulate additional metrics, which will be appended to the existing ones. However, the functions are currently not reentrant, and it is not recommended to run the finalization stage in a low-priority background thread. This might be helpful to keep a player application responsive while calculating the coefficients, but the necessary locking is not implemented.

5.2 Metrics Extraction

The extraction wraps the generic LLSP solver described previously in an even easier API specific to video decoding. Unlike the LLSP solver, this functionality depends on the actual decoder algorithm. Every supported decoder is represented by six functions, two of which only provide the usual create and destroy infrastructure. The remaining functions pass a block of compressed video for metrics extraction, pass a decoding time for learning, trigger the final evaluation of the learned data, and reset the metrics extractor. Currently, a combined MPEG-1/2 and an MPEG-4 part 2 predictor are implemented. Each prediction context stores multiple LLSP solvers internally, one for each frame type the decoder algorithm uses. This allows for metrics of different frame types to be accumulated and evaluated simultaneously, but independently.

The metrics extraction itself is done similarly to the actual decoding. I extended an open-source decoder for the algorithm with additional instructions to determine the desired metrics and report them to the outside. After that, all actual decoding code has been taken out, so that only the bitstream parsing remained. This was further stripped to get a minimalistic "decoder" that does not do any decoding anymore, but only parses the bitstream as much as necessary to retrieve the metrics. This code is called in much the same way the original decoder has been called. Example code from the open-source projects has been used as a starting point: the decoders are passed a block of compressed video data, which they process until an internal decoding cycle has been completed. Then the decoder returns control to the caller, awaiting further data blocks.

For MPEG-1/2, the libmpeg2 decoder library [20] in version 0.4.0 has been used, and for MPEG-4 part 2, the XviD library [21] in version 1.0.3. Both are available under the terms of the GPL [32], so they can be combined with VERNER without any licensing issues.

I wanted the prediction to be as independent of the actual decoder implementation as possible. However, there is one area where this could not be accomplished entirely: both the predictor and the decoder have to stay in sync. That is, their stream parsers have to be in the same state. This requires a compatible handling of data blocks by both. XviD always expects to receive a full frame, which VERNER honors, so there is no problem here. But libmpeg2 can receive arbitrary chunks of data, which it will copy into an internal buffer. The predictor has to behave the same way. The problem here is that the frames can be scattered across multiple chunks, and decoding will already start after the first chunk is received. The predictor, however, can only estimate the total decoding time after it has received the entire frame. This leads to situations where the decoder has already spent CPU time on decoding a good part of the frame before a prediction is available. Several solutions could be applied to solve this:

• A complete frame is accumulated before data is being passed to the decoder. This requires either an additional copying of the compressed data or modifications to libmpeg2.

• VERNER's demultiplexers are changed so that they always deliver complete frames to libmpeg2. This requires the demultiplexers to know the concept of frames, which should better be encapsulated inside the decoder.

• The prediction is modified to work with a granularity smaller than frames, for example a slice-based prediction.

As this work targets the prediction itself, I did not implement any of these strategies, but I would favor the last one.

Based on the extracted metrics, the predictor calculates a predicted frame decoding time and stores the metrics internally. Learning decoding times is now possible by measuring the actual time consumed by a decoding cycle, which usually directly follows the metrics extraction and prediction to benefit from caching. This measured decoder cycle time is then appended to the LLSP matrix using llsp_accumulate(), together with the previously extracted metrics stored in the predictor. If an error has occurred during extraction, the predictor is invalid and learning will silently fail.
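As an illustration of this learning step, here is a minimal sketch of what accumulating into and finalizing a linear least-squares solver could look like. It is a simplified stand-in for the actual llsp_accumulate() interface, not the real implementation: it uses plain normal equations, whereas a numerically robust solver would use the orthogonalization techniques discussed earlier [25, 28].

```c
#include <stddef.h>

#define METRICS 2  /* hypothetical: number of metrics per frame */

/* accumulated normal equations: ata = sum(m m^T), atb = sum(m * t) */
typedef struct {
    double ata[METRICS][METRICS];
    double atb[METRICS];
} llsp_t;

/* fold one frame's extracted metrics and its measured decoding time
 * into the accumulated matrix (simplified stand-in for llsp_accumulate) */
static void llsp_accumulate(llsp_t *s, const double metrics[METRICS], double time)
{
    for (size_t i = 0; i < METRICS; i++) {
        for (size_t j = 0; j < METRICS; j++)
            s->ata[i][j] += metrics[i] * metrics[j];
        s->atb[i] += metrics[i] * time;
    }
}

/* finalization: solve (A^T A) x = A^T b by Gaussian elimination
 * (no pivoting; fine for a sketch, not for production use) */
static void llsp_finalize(const llsp_t *s, double coeff[METRICS])
{
    double a[METRICS][METRICS + 1];
    for (size_t i = 0; i < METRICS; i++) {
        for (size_t j = 0; j < METRICS; j++)
            a[i][j] = s->ata[i][j];
        a[i][METRICS] = s->atb[i];
    }
    for (size_t p = 0; p < METRICS; p++)          /* forward elimination */
        for (size_t r = p + 1; r < METRICS; r++) {
            double f = a[r][p] / a[p][p];
            for (size_t c = p; c <= METRICS; c++)
                a[r][c] -= f * a[p][c];
        }
    for (size_t i = METRICS; i-- > 0; ) {         /* back substitution */
        double sum = a[i][METRICS];
        for (size_t j = i + 1; j < METRICS; j++)
            sum -= a[i][j] * coeff[j];
        coeff[i] = sum / a[i][i];
    }
}
```

The coefficients returned by the finalization are the per-metric execution time weights used for the prediction.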

The evaluation can be made at any point in the learning process. It will finalize all the LLSP solvers internal to the predictor and store the resulting coefficients on file. Evaluation is non-destructive, so learning can continue afterward.

The discontinuation function resets the stripped-down decoder internal to the predictor, but it does not reset the accumulated matrices with the metrics and the measured decoding times. This function should be called between different video streams. It allows learning prediction coefficients from more than one video, which is extremely helpful, because the influence of metrics like pixel resolution can only be calculated if it is not constant.

Implementing support for a new decoder algorithm would simply require writing these six functions for the new decoder.

5.3 Integration into VERNER

I extended the VERNER video player to accept three new command line arguments:

--learn to enable learning mode and pass the file name to store newly learned coefficients to.

--predict to enable prediction mode and pass the file name to load the coefficients for the prediction from.

--silent to disable any output and synchronization, to process the compressed video data as fast as possible.

The file names are handed over via IPC to the vcore process, where the entire learning and prediction take place. Although audio decoding time is not considered here, parts of the infrastructure have been implemented for audio decoding, too, to keep the audio and video parts of VERNER symmetric. The metrics extraction and prediction are done directly preceding the actual decoding. The exact same data block processed in the decoding is also used for the prediction. The wallclock times for the prediction and decoding are derived from the timestamp counter of the CPU. The decoder cycle time, measured directly after the decoding, is used for learning, whereas the time the prediction took is interesting to verify the overhead introduced by the metrics extraction. We will see figures in Subsection 6.1. Optional RT Mon sensors can give online plots of prediction error and overhead. Of course, it is beneficial to use the time for which the thread was actually running instead of the wallclock time to take the scheduling out of the equation. Therefore, I used Fiasco's fast userspace thread time measurement to get more accurate execution times.
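On x86, the timestamp counter can be read with the rdtsc instruction. The following is a generic sketch of such a measurement, not VERNER's actual code path (which additionally relies on Fiasco's thread time to exclude preemption):

```c
#include <stdint.h>

/* read the CPU's timestamp counter (x86 only; generic sketch) */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* usage: bracket the decoder cycle with two reads; the difference
 * is the wallclock cost of the cycle in TSC ticks:
 *
 *     uint64_t start = rdtsc();
 *     decode_frame(...);             // hypothetical decoder call
 *     uint64_t ticks = rdtsc() - start;
 */
```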

The evaluation of the predictor in learning mode is done on video boundaries. Every time a video finishes playback and a new video starts, the current predictor is evaluated, which will store the learned coefficients. If the new video uses a different decoder than the previous one, the current predictor is replaced with a new one, otherwise the current predictor is appended to. This limitation exists because VERNER only stores one predictor for now. This could be extended in the future to have one predictor per decoder, but it is sufficient for our purposes, because it allows using multiple files for learning.

Unlike --silent, all the new features are optional and can be disabled in the VERNER compile-time configuration. The silent mode is always available and adds a new dropping playmode to VERNER’s synchronization component, which just throws away all decoded video frames instead of displaying them. For symmetry, decoded audio samples are also thrown away. For usability reasons, I did not want the video window to be displayed in silent mode, so I delayed the initialization of that window until it is first used. Because it is never used in silent mode, it will not appear. Again, for symmetry, the same has been done for the audio driver.

5.4 Frame Reordering

One problem left to explain is the discrepancy between the frame decoding time and the decoder cycle time. As you may have noticed, I was talking about decoder cycle times in this section, whereas previous sections dealt with frame decoding times. The problem is that one decoder cycle might not always decode exactly one frame. Dependencies amongst frames can make it necessary that a frame A requires the availability of the decoded frame B for reference. Although frame A is to be displayed before frame B, frame B has to be decoded before A. Frame B is called a forward reference. Because decoders output the frames in display order, this can lead to one decoder cycle decoding two frames, first B and then A in this example. The next decoder cycle can then either decode nothing and just flush out B, or keep one reference in storage, outputting one frame per cycle from now on, if the decoder can make do with one forward reference.

This has different effects on the learning and prediction modes. In learning mode, a decoder cycle decoding more than one frame is not useful, because the cumulated time for the cycle cannot be separated into the individual per-frame decoding times needed for the LLSP solver. Modifying the LLSP solver to handle such cumulated times would be possible, but depending on the number of frames per cycle that should be supported, this would at least double the number of columns in the accumulated matrix, which in turn would increase the execution time of the LLSP finalization by a factor of 2^4 = 16. With the decoder implementations used here, this situation occurs only once at the beginning of every video, so it is too rare to justify such a high price. I decided to simply ignore decoder cycles decoding more than one frame by invalidating the predictor, so the learning silently fails for this cycle.

In prediction mode, the predicted frame decoding times for the multiple frames can simply be added to estimate the decoder cycle time. The other case, a decoder cycle not decoding anything, is not interesting at all, because the execution time for this step is negligible.
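Sketched with hypothetical names, the cycle estimate in prediction mode is just a sum:

```c
#include <stddef.h>

/* predicted cost of one decoder cycle that decodes several frames
 * (e.g. forward reference B, then A): the sum of the individual
 * per-frame predictions; names are hypothetical */
static double predict_cycle(const double *frame_prediction, size_t frames_in_cycle)
{
    double total = 0.0;
    for (size_t i = 0; i < frames_in_cycle; i++)
        total += frame_prediction[i];
    return total;
}
```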

However, one decoder cycle outputting one frame is still the common case. Otherwise, the decoder would not be well-suited for realtime applications, which would want to schedule the decoding of each frame individually.

6 Results

6.1 Prediction Accuracy and Overhead

I will now present the resulting decoding time predictions on real-life video material and also discuss the prediction overhead. Because MPEG-4 part 2 was the primary focus of my work, I will begin with that, but will also add remarks on MPEG-1/2.


Figure 19: Prediction of stochastical load fluctuations for Voyager video. [Plot: predicted and measured decoding time (ms) over frame numbers 1900–2000.]

6.1.1 MPEG-4 part 2

I used small snippets of four MPEG-4 part 2 videos to train the coefficients. The videos are the two IBM commercials and the two movie trailers from Table 1 on page 18. They differ in their resolution, which is necessary because otherwise the influence of the pixel count cannot be calculated properly. They also cover both the simple and advanced simple visual profiles of MPEG-4 part 2 and different data rates, so the feature combinations of the algorithm are sufficiently represented. I did not use the entire videos but always the first 500 frames of each. This ensures an equal length so that no video would outweigh the others. In addition to that, the videos for learning and for the actual prediction need to fit into memory to play them back in VERNER.

I played the four clips in VERNER’s learning mode on a 700 MHz AMD Athlon machine to calculate the coefficients. In another VERNER instance, this time in prediction mode, I used the derived coefficients to predict the frame decoding times for the Voyager simple visual profile video. The results can be seen in Figures 20 to 23.

With relative errors [30] from -0.1050538 to 0.1978610 with an average of 0.0283729 at a standard deviation of 0.0443204, and absolute errors [31] from -0.746 ms to 0.909 ms with an average of 0.1546128 ms at a standard deviation of 0.2575455 ms, the prediction is quite accurate. In particular, I can predict not only the long-term behavior of the decoding time, but also the short-term fluctuations, as can be seen enlarged in Figure 19. [1] uses the terms “structural load fluctuation” and “stochastical load fluctuation” for these long-term and short-term variations respectively. I believe, now that they can both be predicted, the term “stochastical” should be reconsidered.

I also wanted to test the prediction quality under different conditions, so I used small command line utilities to take the learning and predicting functionality out of VERNER. I used these tools on a PowerPC system (PowerBook G4 1.33 GHz) running Mac OS X 10.4, training the coefficients with the same set of short learning clips and then predicting the decoding times for the Amazon advanced simple profile high definition video. I think the results in Figures 24 to 27 are outstanding. Remember that this configuration is a completely different operating environment, with a different machine architecture, including different cache behavior and endianness, as well as a different compiler (gcc-4.0 as opposed to gcc-3.2 used for VERNER). Further, the coefficients have been derived from standard definition content and are now applied to a high definition video with about four times the frame size, but the prediction still works. The relative error ranges from -0.4312217 to 0.1645416 with an average of 0.0324660 at a standard deviation of 0.0448016. The absolute error spans -25.1316 ms to 17.533 ms with an average of 3.3054763 ms at a standard deviation of 4.6638761 ms. The two individual peaks in the histograms are most likely due to a slight nonlinearity between one or more of the metrics and the execution time. We have seen such nonlinear behavior in Subsection 3.3. Now, with the high resolution of the video, this nonlinearity is scaled up and is reflected in the long-term fluctuations of the absolute error.

It should be noted that systems with a more chaotic scheduling due to running background


Figure 20: Measured frame decoding times for Voyager video. [Plot: decoding time (ms) over frame numbers 0–2000.]

Figure 21: Predicted frame decoding times and absolute errors for Voyager video. [Plot: time (ms) over frame numbers 0–2000; predicted curve and absolute error.]

Figure 22: Histogram for the relative error of Voyager video decoding time prediction. [Frame count over relative error, -0.11 to 0.21.]

Figure 23: Histogram for the absolute error of Voyager video decoding time prediction. [Frame count over absolute error (ms), -0.8 to 1.0.]


Figure 24: Measured frame decoding times for Amazon video. [Plot: decoding time (ms) over frame numbers 0–2800.]

Figure 25: Predicted frame decoding times and absolute errors for Amazon video. [Plot: time (ms) over frame numbers 0–2800; predicted curve and absolute error.]

Figure 26: Histogram for the relative error of Amazon video decoding time prediction. [Frame count over relative error, -0.5 to 0.2.]

Figure 27: Histogram for the absolute error of Amazon video decoding time prediction. [Frame count over absolute error (ms), -30 to 20.]


Figure 28: Histogram for the overhead of Amazon video decoding time prediction. [Frame count over relative prediction overhead, 0.04 to 0.24.]

processes are not suitable for learning or prediction, because the measured times will show scheduling artifacts when the predictor or decoder are preempted. This can be solved by either enhancing the operating system with realtime properties or using a timebase that compensates for the time during which the prediction process is interrupted.

The overhead introduced by the metrics extraction and the prediction can be seen in Figure 28. The average overhead is 5.6 %. The main source for the overhead is the bitstream parsing and macroblock decompression. As we have seen in Figure 6 on page 18, an additional bitstream parser would account for an average overhead of 16 %, so optimization by omitting unnecessary code has already been done. Unfortunately, further reduction of the overhead is difficult, because to extract metrics like macroblock counts, we have to look at each individual macroblock. However, skipping over single macroblocks is only possible by completely decompressing them. Because they are coded according to a variable length table, their size cannot be determined in advance. A possible approach is to split the decoder in half and parse and decompress the bitstream in the first half. The metrics can then be extracted from the preprocessed data, and the second decoder phase would do the rest of the decoding, using the already decompressed macroblocks as an input. I wanted to avoid such constructions because they usually require heavy modifications to decoder code, so new decoder implementations would be difficult to deploy. Altenbernd, Burchard, and Stappert have taken this approach in [3], and they also ended up with overheads of 4-9 %, depending on the video, so there may not be much benefit in pursuing this.

Figure 29: Prediction of stochastical load fluctuations for movie trailer. [Plot: predicted and measured decoding time (ms) over frame numbers 570–660.]

Another way to reduce the overhead would be to not decode the motion vectors that we use to predict the temporal prediction step, as discussed in Subsection 3.3.7 on page 20. This would lower the overhead from 5.6 % to 4.1 %, but it also reduces the quality and transferability of the prediction coefficients.

6.1.2 MPEG-1/2

From the available MPEG videos in Table 2, I used the Sokolovski clip and the first 500 frames of the Antz scene to train the predictor. These two videos have different resolutions and cover both MPEG-1 and MPEG-2. The prediction for the MPEG-2 movie trailer in VERNER is shown in Figures 29 to 33. The average relative error is -0.0498824 at a standard deviation of 0.0426241, the average absolute error -0.7198970 ms at a standard deviation of 0.5429565 ms. I had to shorten the video to fit into the testing machine’s memory.


Figure 30: Measured frame decoding times for movie trailer. [Plot: decoding time (ms) over frame numbers 0–900.]

Figure 31: Predicted frame decoding times and absolute errors for movie trailer. [Plot: time (ms) over frame numbers 0–900; predicted curve and absolute error.]

Figure 32: Histogram for the relative error of movie trailer decoding time prediction. [Frame count over relative error, -0.20 to 0.12.]

Figure 33: Histogram for the absolute error of movie trailer decoding time prediction. [Frame count over absolute error (ms), -2.4 to 1.6.]


Figure 34: Measured frame decoding times for movie trailer. [Plot: decoding time (ms) over frame numbers 0–2000.]

Figure 35: Predicted frame decoding times and absolute errors for movie trailer. [Plot: time (ms) over frame numbers 0–2000; predicted curve and absolute error.]

Figure 36: Histogram for the relative error of movie trailer decoding time prediction. [Frame count over relative error, -0.6 to 0.4.]

Figure 37: Histogram for the absolute error of movie trailer decoding time prediction. [Frame count over absolute error (ms), -4 to 2.]


video name     duration  resolution  size    codec   properties
Sokolovski     0:13 min  192×144     636 KB  MPEG-1  highly compressed thumbnail video
Antz           1:20 min  720×576     64 MB   MPEG-2  DVD movie credit scene, low in movement
movie trailer  1:13 min  720×576     43 MB   MPEG-2  dynamic, fast cuts, DVD quality material

Table 2: The MPEG-1/2 sample videos used to test the prediction.

Again, I cross-checked my method on the PowerPC CPU, also predicting the MPEG-2 movie trailer (Figures 34 to 37). The average relative error here is -0.0155866 at a standard deviation of 0.0985084, the average absolute error -0.1121871 ms at a standard deviation of 0.4853568 ms. Unfortunately, the average overhead introduced by the prediction is 18.6 %, but profiling of the metrics extraction code showed that 83 % of the execution time is spent on bitstream parsing and decompression, which is required for reasons presented previously. Motion vector decoding and glue code account for the rest. Not accounting for the IDCT shortcut (see Subsection 3.3.9 on page 22) would lower the overhead to 17.2 %, but also degrade the prediction quality. I can only explain the higher overhead with the lower complexity of MPEG-1/2 compared to MPEG-4 part 2, resulting in a higher portion of time used for parsing than for actual calculation.

Although small parts could still be improved, I think the overall goal of accurately predicting decoding times for MPEG-1/2 and MPEG-4 part 2 in both the simple and advanced simple profiles has been accomplished. The prediction coefficients derived from one set of learning videos can be applied to a wide range of content, which is critical to the usability of future applications, because it reduces the amount of learning necessary for good results.

6.2 Remaining Issues

There are several implementation details that could be improved upon:

• The prediction overhead could possibly be reduced by applying further optimization.

• VERNER’s data delivery could be modified to not spread the data of one frame across multiple chunks.

• VERNER’s user interface should provide better feedback on the learning mode.

• The on-disk storage format of the coefficients is inefficient. A more sophisticated file format needs versioning and should be denser.

• It should be possible to run the finalization stage of the LLSP solver in a background thread. The required data should be properly locked.

• The number of metrics in use is currently limited.

Another problem is decoder specific and might become more pressing as more complex decoders appear: frame reordering. Decoder implementations where the frame decoding time and the decoder cycle time become even more independent of one another pose difficult problems. If the decoder does not return control after decoding a frame, measuring the decoding times of individual frames, and also the entire per-frame scheduling of a realtime application, would require additional work or modifications to the decoder implementation. Current decoder implementations decode 0 to 2 frames per decoder cycle, because the algorithms examined here use one forward reference frame at maximum. Future decoders with heavy inter-frame dependencies will do even more frame reordering, so to output one frame, the decoder might need to decode an arbitrary number of frames. Decoder implementations should return control after every decoded frame instead of every frame ready for display.

Future decoders’ complexity itself can also be a problem. The MPEG-4 standard itself is huge, ranging from audio and video compression over container formats to interactive 3D scenery. The concepts are still separated into individual compression techniques, but future decoders might want to unify here. At some point, there could just be too much bitstream parsing required to obtain the metric values, increasing prediction overhead beyond sensible limits. At this point, it will be interesting to pre-determine the metrics already during encoding and embed them at prominent positions inside the bitstream. The MPEG-4 part 2 bitstream already contains this concept in a complexity estimation header, which stores information like macroblock and DCT coefficient counts. Unfortunately, this header is optional and the common encoder implementations do not make use of it, so almost no publicly available MPEG-4 part 2 video stream includes this header. Future video bitstreams and container formats should consider this and assign header space for mandatory metrics storage.

6.3 Conclusion

Several areas are still open for future research. First of all, when H.264 has matured, it would be interesting to see how it fits into the decoder model and prediction system presented here. Because H.264 is so complex, it will be difficult to cover all features the algorithm provides with appropriate metrics. Another challenge is to further automate the implementation of new algorithms, as has already been done with the column dropping. One could think of automatic derivation of metrics from profiling runs and function call frequency. This might even lead to results for codecs only available in binary form.

On the other hand, the prediction could be made more precise, and it would be an interesting research subject to include source code analysis into my method to completely avoid any underestimation in the predicted decoding times.

Transferability is another area that could be improved upon. The coefficients calculated from learned decoding times on one architecture could be used on another architecture by compensating for architectural features like processor speed and cache sizes. Even on the same architecture, further analysis of caching patterns might improve the prediction when coefficients learned from small videos are applied to high definition content.

Another approach would be to redesign decoder implementations to return control after each decoded slice. I am confident that my prediction method can be adapted to predict execution times on a per-slice basis. If compression algorithms assign priorities to slices, a slice-based scheduling for the decoder is worth investigating. This would also solve the problem with decoding time prediction for libmpeg2.

Despite these issues, a system to predict decoding times with reasonable accuracy and acceptable overhead has been designed and implemented. The prediction relies on preprocessing and statistical evaluation of training runs rather than requiring heavy source code analysis or decoder modifications. This ensures that the presented results will not be obsoleted by decoder development such as code optimizations, because the method is independent of the specific decoder code. The integration into VERNER makes the software easy to use by allowing coefficient learning with a few mouse clicks. The prediction results can be examined as comprehensive RT Mon online diagrams. When DROPS in general and VERNER in particular grow further, the usability of the current solution can be improved with background threads for the calculation work, to not interrupt the user.

While the prediction of decoding times itself is already interesting, the results presented here should be regarded in a larger context. It is my firm belief that future operating systems will have realtime properties, because only such a development can solve problems like guaranteed data rates to a DVD recorder or click-free audio playback cleanly and reliably. The schedulers of these operating systems will benefit from knowing resource usage in advance to make favorable scheduling decisions. My work can help to provide that and can be nicely complemented by research on perception models, which can help assign benefit values to video frames. This would allow video frames to compete for the CPU resource on the basis of a true price-performance ratio, resulting in optimal video presentation even in high-load situations, to really improve the user’s experience.

References

[1] Clemens C. Wüst, Liesbeth Steffens, Reinder J. Bril, Wim F.J. Verhaegh: QoS Control Strategies for High-Quality Video Processing. In Proceedings of the 15th Euromicro Conference on Real-Time Systems (ECRTS).

[2] Damir Isovic, Gerhard Fohler: Quality aware MPEG-2 Stream Adaptation in Resource Constrained Systems. In Proceedings of the 16th Euromicro Conference on Real-Time Systems (ECRTS).

[3] Peter Altenbernd, Lars-Olof Burchard, Friedhelm Stappert: Worst-Case Execution Times Analysis of MPEG-2 Decoding. In Proceedings of the 12th Euromicro Conference on Real Time Systems (ECRTS).

[4] Andy Bavier, Brady Montz, Larry L. Peterson: Predicting MPEG Execution Times. In SIGMETRICS 1998, Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems.

[5] Claude-Joachim Hamann, Jork Loeser, Lars Reuther, Sebastian Schönberg, Jean Wolter, Hermann Härtig: Quality Assuring Scheduling – Deploying Stochastic Behavior to Improve Resource Utilization. In Proceedings of the 22nd IEEE Real-Time Systems Symposium (RTSS-XXII).

[6] ISO/IEC 11172-2: Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s – Part 2: Video

[7] ISO/IEC 13818-2: Generic coding of moving pictures and associated audio information – Part 2: Video

[8] ISO/IEC 14496-2: Coding of audio-visual objects – Part 2: Visual

[9] ISO/IEC 14496-10: Coding of audio-visual objects – Part 10: Advanced Video Coding

[10] TandbergTV diagram on MPEG-2 coding efficiency http://www.m4if.org/public/documents/vault/m4-out-20027.pdf, page 8

[11] MPEG-4 part 2 visual profiles http://www.m4if.org/public/documents/vault/m4-out-30037.pdf

[12] http://www.uni-koeln.de/WRAKG/Berichte/Klassifikation/node203.html

[13] Apple H.264 FAQ http://www.apple.com/mpeg4/h264faq.html

[14] Digital Video http://de.wikipedia.org/wiki/Digital_Video

[15] SVQ3 and WMV9 are related to H.264http://de.wikipedia.org/wiki/H.264#Verwandte_Verfahren

[16] PNG compression filters http://www.libpng.org/pub/png/pngintro.html#filters

[17] The Dresden Real-Time Operating Systems Project http://os.inf.tu-dresden.de/drops/overview.html

[18] Fiasco microkernel http://os.inf.tu-dresden.de/fiasco/


[19] Carsten Rietzschel: VERNER – ein Video EnkodeR uNd playER für DROPS http://os.inf.tu-dresden.de/papers_ps/rietzschel-diplom.pdf

[20] libmpeg2 project http://libmpeg2.sourceforge.net/

[21] XviD project http://www.xvid.org/

[22] David A. Huffman: A method for the construction of minimum redundancy codes. In Proceedings of the IRE, 1952.

[23] Formula for the discrete cosine transform http://www.bretl.com/mpeghtml/DCxfrm.HTM

[24] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, 1980. Subsection 4.8.1: Linear Least Squares. The Normal Equations

[25] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, 1980. Subsection 4.8.2: The Use of Orthogonalization in Solving Linear Least-Squares Problems

[26] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, 1980. Subsection 4.3: The Cholesky Decomposition

[27] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, 1980. Subsection 4.8.3: The Conditioning of the Linear Least-Squares Problem

[28] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, 1980. Subsection 4.7: Orthogonalization Techniques of Householder and Gram-Schmidt

[29] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, 1980. Subsection 4.9: Modification Techniques for Matrix Decompositions

[30] Relative Error http://mathworld.wolfram.com/RelativeError.html

[31] Absolute Error http://mathworld.wolfram.com/AbsoluteError.html

[32] GNU general public license http://www.gnu.org/licenses/gpl.txt
