
An Efficient Streaming Method in Overlay Multicast Networks

Suk Kim Chin School of Business & Informatics

Australian Catholic University, PO BOX 968, North Sydney NSW 2059, Australia Kim.Chin@acu.edu.au

Abstract— The predominance of unicast connectivity in the current Internet infrastructure impedes the widespread deployment of video applications. As such, overlay multicast networking has been proposed to accelerate the ubiquity of video streaming in the Internet. Based on an existing overlay multicast network architecture, this paper proposes a method that aims at improving the efficiency of the streaming scheme that is part of this architecture. Instead of sending multiple streams of the same data across the network, only the essential video signals are sent redundantly. Evaluation of the method yields promising results in preventing loss of the essential video data under burst packet losses.

Keywords— Overlay multicast, video streaming, interleaving data, layered coding, base and enhancement data

I. INTRODUCTION

For the majority of the Internet user community, the most appealing applications are audio and video. "Video is the ideal fit for the Internet. While text and pictures do well to convey ideas, video provides the most natural, comfortable, and convenient method of human communications. Even the least dynamic examples of video reveal infinitely more than the audio-only versions" [1]. The original Internet infrastructure cannot support the requirements of desirable emerging applications such as Internet TV. With the exponential growth of its user community, it is also impossible for the current Internet to support the diverse range of group and collaboration applications (such as video conferencing and interactive multimedia educational systems) without driving it into momentary meltdowns. The nature and diversity of these group applications require the underlying system to support varying demands in the dimensions of bandwidth, latency, reliability, multiple sources, scalability and system dynamics.

As the Internet is dominated by unicast connectivity, it is incapable of satisfying these requirements. Thus IP multicast [2] was proposed in the late 1980s to support transmission of audio/video data more efficiently than unicast. Unicast transmits one copy of data separately to each member of a group, whereas a multicast data source sends only one copy of data, which is replicated as necessary when propagating through the communication channels towards the receivers. Thus group communications are enabled without traffic explosion in the network. However, deployment of IP multicast is still non-ubiquitous due to a multitude of issues, including the complexity of its protocols. Frustrated with the lack of success in widespread deployment of IP multicast, researchers have turned to deploying multicast technology in the application layer, resulting in the emergence of a myriad of techniques collectively called overlay / application layer multicast. These techniques are designed to give versatile multi-dimensional support through the underlying transport system.

This paper proposes a method for improving the efficiency of overlay multicast delivery of video data across the Internet. The following section details the background and related work. Section III presents the method. Section IV describes the experiments and results. Section V draws the conclusion to this study.

II. BACKGROUND

Exploiting the human visual system's insensitivity to the high frequency components of video signals [3], the proposed method utilizes layered coding to separate the incoming video signals into high and low priority data. Then, by capitalizing on the unordered delivery property of the Internet transmission protocol, the high-priority packets are interleaved with low-priority packets to form an interleaved packet stream to be transmitted across the network.

A. Layered Coding

This section defines layered coding, base and enhancement layers, base and enhancement packets, and base and enhancement packet sequences, using mathematical representations. Let {F_1, F_2, …} represent a sequence of video frames with F_x ∈ [0, 255]^{640×480} for grayscale National Television System(s) Committee (NTSC) video. Then, layered coding is defined as finding an encoder E, which can map any frame, F_x, for example, into L layers [4]. That is,

E: F_x → {l_x^B, l_x^{E(1)}, …, l_x^{E(L-1)}}

where l_x^B is the base layer and l_x^{E(1)} to l_x^{E(L-1)} are the enhancement layers. l_x^{E(1)} is the first enhancement layer and l_x^{E(L-1)} is the last. It is assumed that L = 1, 2, …, 10.
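As an illustrative sketch only (the paper does not specify a particular codec), the following Python fragment decomposes a grayscale frame into one base layer and L-1 enhancement layers using a simple resolution pyramid; the function layered_encode and the pyramid decomposition are assumptions made for illustration, not the coder used in this work.

import numpy as np

def layered_encode(frame, L=3):
    # Toy layered coder: the base layer is a coarse version of the frame,
    # each enhancement layer is the residual needed to restore finer detail.
    layers = []
    current = frame.astype(np.float32)
    for _ in range(L - 1):
        coarse = current[::2, ::2]                                 # 2x downsample
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)    # 2x upsample
        up = up[:current.shape[0], :current.shape[1]]
        layers.append(current - up)                                # enhancement residual
        current = coarse
    base = current                              # l_x^B
    enhancements = layers[::-1]                 # l_x^E(1) ... l_x^E(L-1)
    return base, enhancements

# Example: a synthetic 480x640 grayscale NTSC-sized frame
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
base, enhancements = layered_encode(frame, L=3)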

Now that the input video signals have been separated into different layers, an opportunity exists for creating the base and enhancement packets. The base packet sequence, S(Bx) = {Bx(1), Bx(2), …, Bx(N)}, is created from l_x^B. The enhancement packet sequence, S(Ex) = {Ex(1,1), …, Ex(N,1), Ex(1,2), …, Ex(N,2), Ex(1,3), …, Ex(N,3), …, Ex(1,L-1), …, Ex(N,L-1)}, is the concatenation of the enhancement packet sub-sequences generated from l_x^{E(1)} to l_x^{E(L-1)}, giving a total of (L-1)N packets, where N is the maximum number of packets that can be generated from a layer, base or enhancement. That is, S(Ex) is made up of all the enhancement packets generated from each of the (L-1) enhancement layers.

The enhancement packet subsequence SEx(1) = {Ex(1,1), Ex(2,1), …, Ex(N,1)} is generated from l_x^{E(1)}, SEx(2) = {Ex(1,2), Ex(2,2), …, Ex(N,2)} from l_x^{E(2)}, and l_x^{E(L-1)} generates the packet subsequence SEx(L-1) = {Ex(1, L-1), Ex(2, L-1), …, Ex(N, L-1)}. The subsequences SEx(1), SEx(2), SEx(3), …, SEx(L-1) make up S(Ex), where SEx(1) is the enhancement sequence from enhancement layer one, SEx(2) from enhancement layer two, and so on.
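A minimal sketch of how such packet sequences could be formed, assuming each layer's byte stream is simply split into N fixed-size packets (packetize, N and the placeholder payloads below are illustrative and not part of the paper's scheme):

def packetize(layer_bytes, n_packets):
    # Split one layer's byte stream into n_packets pieces (the last may be shorter).
    size = -(-len(layer_bytes) // n_packets)   # ceiling division
    return [layer_bytes[i * size:(i + 1) * size] for i in range(n_packets)]

N = 8                                               # packets per layer
base_layer = bytes(480 * 640)                       # placeholder base-layer payload
enh_layers = [bytes(480 * 640) for _ in range(2)]   # L-1 = 2 placeholder enhancement layers

S_B = packetize(base_layer, N)                                   # {Bx(1), ..., Bx(N)}
S_E = [p for layer in enh_layers for p in packetize(layer, N)]   # (L-1)N enhancement packets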

Upon arriving at the receivers, a decoding process maps a subset M of the L layers into F̃_x^M, a reconstructed frame:

D: {l̃_x^B, l̃_x^{E(1)}, …, l̃_x^{E(L-1)}} → F̃_x^M.

M < L if one or more layers, base or enhancement, have been lost en route to the receivers; otherwise M = L. l̃_x^B is the base layer made up of the subset Ñ of the N base packets, Ñ ≤ N. That is, Ñ < N if some base packets have been lost or delayed for longer than the Time To Live (TTL)1; otherwise Ñ = N. Each of the enhancement layers l̃_x^{E(1)} through l̃_x^{E(L-1)} is made up of a subset N̂_y of the N enhancement packets, with N̂_y ≤ N, where y = E(1), E(2), …, E(L-1). That is, N̂_E(1) gives the number of packets received by an end-system or a set of end-systems, with N̂_E(1) ≤ SEx(1). N̂_y < N if some enhancement packets from any of the enhancement layers have been lost travelling through the communication channel; N̂_y = N otherwise. For example, the packet stream received by zen (machine name) from Missouri, shown in Fig. 2, is a subset of the packet stream depicted by Fig. 1. If the packet stream represented by Fig. 1 is denoted by P and that of Fig. 2 by P̂, then P̂ < P and it is a subset of P.
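A matching decoder sketch for the toy pyramid coder above (again an assumption, not the paper's decoder): it upsamples the base layer and adds whichever enhancement residuals actually arrived, so missing layers simply yield a coarser reconstructed frame F̃_x.

import numpy as np

def layered_decode(base, enhancements):
    # Reconstruct a frame from the received layers; enhancements may be a
    # partial list if some layers were lost en route.
    frame = base.astype(np.float32)
    for residual in enhancements:               # received residuals, coarse-to-fine order
        up = np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)
        up = up[:residual.shape[0], :residual.shape[1]]
        frame = up + residual
    return np.clip(frame, 0, 255).astype(np.uint8)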

The decomposition of video information gives rise to a set of layers which can be striped across multiple network channels {C1, …, CL} by sending layer l_x^B over C1, l_x^{E(1)} over C2, and so on. This layered transmission of decomposed video signals has been used for congestion control in multicast networks [4, 5, 6].

1 Packets delayed for longer than TTL are considered lost.

Fig. 1 Packet stream sent by WRN2 (1 2 3 … 438 439 440 …)

Fig. 2 Packet stream received by zen (machine name) (1 3 3 … 438 443 444 …)

Layered coding has also been the "most popular and effective means of providing error resilience in a video transport system" when "combined with transport prioritization" [3]. Wang et al define transport prioritization as "various mechanisms" for providing "different QoS in transport, including using unequal error protection, which provides different channel error/loss rate, and assigning different priorities to support different delay/loss requirements". For example, ATM networks implement prioritization using one bit in the cell header, which signals dropping low-priority cells, such as cells with content from l_x^{E(1)} to l_x^{E(L-1)}, when traffic congestion occurs.

Layered coding has been popular as a forward error concealment technique because it can facilitate layered transmission. Likewise, it has been used for congestion control to provide gradual enhancement for images. This project employs layered coding to decompose input video signals so as to enable isolation of essential video data through packet interleaving and scrambling; layered coding has not been used to facilitate layered transmission in this study.

UDP/IP together offer an unreliable/unordered transport service where “unreliable” service means a “may-be-loss”, “may-be-duplicates” and “no-data-integrity” [7]. UDP could lose pieces of video/audio data or even an entire frame. For example, receivers spiff (from Sweden), ursa (from Germany), lupus (from Germany), excalibur (from California) and bagpipe (from Kentucky), each had a loss burst of 21 consecutive packets when they tuned into Radio Free Vat in California [8]. The 21 successive packets might be a frame or close to the data content of an entire frame. Since UDP offers no service other than port multiplexing and error detection, these losses are untreated.

To prevent loss burst of critical video information in such a contiguous manner, layered coding is used to separate the key video data from the less critical signals. This process facilitates interleaving of critical and non-critical data, which is possible because of the unordered delivery feature of UDP. Interleaving packets enhances the transport protocol by preventing the contiguous loss of critical video information. That is, it “transforms” contiguous losses into isolated losses.

2 WRN stands for World Radio Network.



To enhance UDP further, Clark and Tennenhouse's [9] Application Level Framing (ALF) principle is employed in compressing video data into Application Data Units (ADUs). These ADUs can be processed immediately upon arrival at the decoder, enabling a faster progressive display of images on the receivers' screens (refer to Fig. 3). Images (c) and (d) appear on the screen when most ADUs have been processed.

Fig. 3 Application data units (images)

Images (a) and (b) show that ADUs can be processed independently. Image (a) has all the enhancement data and the base data whereas image (b) has some missing enhancement information as there are some blurred patches on the image. Missing enhancement data produces image (c) and when critical data are lost, the result is image (d).

Fig. 4 Images produced by ADUs.

B. Application Layer Framing

The flexible and efficient transmission of video data is enabled through sharing common information between the application and the network protocol architecture. In ALF, the application divides the video data into path-MTU (Maximum Transmission Unit)-size pieces called ADUs, which are the units of communication and processing. Each ADU carries a small header describing how to decompress the content of the ADU and where to place it at the end-system. Each ADU can then be processed independently, giving images like those in Fig. 4 (a) and (b). Comparing the patches labelled A, B, C and D in (a) and (b) shows that some enhancement information is missing in the blurred patches in (a).
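To make the "small self-describing header" idea concrete, a hypothetical ADU layout is sketched below as a Python dataclass; the field names are assumptions for illustration, not the paper's wire format.

from dataclasses import dataclass

@dataclass
class ADU:
    # Hypothetical ADU: header fields describe how to decode and where to
    # place the payload, so each unit can be processed independently.
    frame_no: int      # frame the data belongs to
    layer: int         # 0 = base, 1..L-1 = enhancement layer
    block_x: int       # spatial placement of the decoded block
    block_y: int
    payload: bytes     # compressed data, sized to fit within the path MTU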

C. Related Work

Networks for content distribution require expensive technologies due to the inherent bandwidth-intensive nature of video data [3]. IP multicast was designed to facilitate efficient transmission of streaming video, thus easing the cost, but its existence in the Internet is sparse. As such, there has been a flurry of activity in deploying multicast functionality in the application layer. Unlike its network layer counterpart, application layer multicast is easy to deploy as it is independent of network infrastructure support [10]. This project joins the application / overlay multicast research activities by designing a method for efficient transmission of video data through the use of data interleaving.

The purpose of scrambling and interleaving data is to isolate errors due to packet loss, particularly multiple successive packet losses. Studies in [11, 12] used data interleaving techniques for isolating errors due to cell loss in an ATM network. As their methods are designed for an ATM network, prioritization capacity is assumed available. The underlying principle behind all of these methods is to randomize video data so that errors incurred as a result of packet loss are isolated. Tom et al [11] interleaved blocks of data randomly to conceal the effect of packet loss. Whilst the method is effective in randomizing lost information (spectrally and spatially) in an ATM environment because the size of a cell is very small, the above method would not be as effective with the Internet as packet sizes are significantly larger. This means the loss of a few consecutive packets would wipe out a lot more information, rendering error concealment less effective or even impossible. The same drawbacks apply to the methods described in [12].

To alleviate the excessively long delay incurred by the abovementioned method [11], Zhu et al [12] used odd/even block interleaving instead of random block interleaving. In an Internet environment, this would result in excessive information loss when several consecutive packets are lost, as the odd block is very close to the even.

Chin's method for interleaving base and enhancement packets spreads out the base data as far as possible. This is ensured through scrambling the blocks of base data before putting them in a packet thus preventing loss of contiguous blocks of data.

III. THE PROPOSED METHOD

The proposed method detailed herein is derived from Chin's and Andreev et al.'s work on video data delivery across communication channels. Chin's [13] work aims at reliable delivery of video data via biased protection of essential signals, whereas Andreev et al.'s [14] work offers reliability through sending multiple redundant video streams, which undermines the key objective of multicast technology: saving bandwidth. In order to facilitate biased protection towards base layer information, it is necessary to interleave base data with enhancement data.

A. Interleaving Base and Enhancement Packets

To prevent burst loss of consecutive base packets, a packet sequence is formed by deliberately interleaving a base packet with a short sequence of enhancement packets. If Bx(i) is the ith base packet and S1E = {Ex(j,k), Ex(j+4,k+1), …} is a sequence containing between 2 and 9 enhancement packets, where Ex(j,k) is the jth enhancement packet from the kth enhancement layer, then the resulting interleaved packet sequence is: Bx(i), {Ex(j,k), Ex(j+4,k+1), …}, Bx(i+1),


{Ex(j+2,k+1),…Ex(j+r, k+s),…}, … where j+r ≤ N, k+s ≤ (L-1), i = 1,2,…N, j = 1,2,…(L-1)N and k = 1,2,…(L-1). J = 1, 2, 3,… , (LN – N -1) and r, s are elements of J. The terms in the enhancement packet sequence S1E are random packets taken from S(Ex). A video stream can be split into one base and up to nine enhancement layers. The enhancement layers consist of high and medium frequency components [12].

The sequences of enhancement packets are randomized such that the corresponding base and enhancement packets are never placed next to each other. Ex(3,2) is the corresponding enhancement information for Bx(3); the third enhancement packet from layer two enhances the video data in the third base packet. To meet this condition, j must not equal i. For example, if i = 3, then a valid {Ex(j,k), Ex(j+5,k+1), Ex(j+7,k+2), Ex(j+3,k)} to be inserted in between Bx(3) and Bx(4) is {Ex(5,1), Ex(10,2), Ex(12,3), Ex(8,1)} and if j=5, k=1 and r=3, s=1 then {Ex(j+2,k+1),…Ex(j+r, k+s),…} could be {Ex(7, 2), Ex(13, 3), Ex(8,2), Ex(11, 2)}, for L=5. Note that r, s can be any integer as long as the resultant enhancement packets do not correspond either to each other or to the base packets. This is so that if Bx(3) were lost, say, then Ex(3,1) or Ex(3,2) is still available for enhancing the reconstructed signals. Zhu et al’s adaptive scheme, for varying “interpolation coefficients in spatial, temporal, as well as frequency domains”, “can effectively recover the lost low-frequency components while still preserving the received high-frequency contents” [12]. A rich assortment of signal reconstruction schemes is available, some of which may be extended to multicast networks; others can be modified [12].
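The interleaving rule can be sketched as follows (the function, packet tuple format and random policy are illustrative assumptions; the paper does not prescribe an implementation): each base packet Bx(i) is followed by a run of 2 to 9 randomly drawn enhancement packets, skipping any enhancement packet whose index j equals the current base index i.

import random

def interleave(base_pkts, enh_pkts, group_min=2, group_max=9, seed=0):
    # base_pkts: list of payloads; enh_pkts: list of (j, k, payload) tuples.
    rng = random.Random(seed)
    remaining = list(enh_pkts)
    rng.shuffle(remaining)
    stream = []
    for i, b in enumerate(base_pkts, start=1):
        stream.append(("B", i, b))
        run_len = rng.randint(group_min, group_max)
        # never place Ex(i, k) directly after Bx(i)
        run = [p for p in remaining if p[0] != i][:run_len]
        for p in run:
            remaining.remove(p)
        stream.extend(("E", j, k, payload) for (j, k, payload) in run)
    stream.extend(("E", j, k, payload) for (j, k, payload) in remaining)  # leftovers
    return stream

# Example: 4 base packets, 2 enhancement layers of 4 packets each
base = [b"base-%d" % i for i in range(1, 5)]
enh = [(j, k, b"enh") for k in (1, 2) for j in range(1, 5)]
stream = interleave(base, enh)

A fuller implementation would also keep mutually corresponding enhancement packets apart from one another, as required above.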

It is anticipated that increasing the number of terms in {Ex(j,k), …} would lower the probability of consecutive base packets being lost in a burst. For instance, a burst loss of four successive packets would incur at most one base packet loss in the packet sequence: Bx(i), {Ex(j,k), Ex(j+4,k+1), Ex(j+7,k+1), Ex(j+r, k+s)}, Bx(i+1), {Ex(j, k+1), … Ex(j+r+3, k+s), …}, …. However, if the number of terms in the enhancement subsequence were reduced to one or two, then two consecutive base packets could be lost in a burst loss of four packets. For example, if the enhancement sequence (in the case of a two-layered source coder, with only two layers being generated from the incoming video signals) consists of only one term, then the interleaved packet sequence can be {Bx(i), Ex(j,k), Bx(i+1), Ex(j+7,k+1), …}. In that case, a burst loss of four consecutive packets would incur a loss of two base packets.

B. The Method

The method proposed herein aims at improving Andreev et al.'s work [14]. Their overlay multicast network is designed to transmit video signals from the encoder to the users in an attempt to ease bottlenecks in the server and the network. However, one of their design requirements is to enforce reliability through transmitting multiple copies of a video stream to the recipients. It is questionable whether the aforementioned bottlenecks can be eased given that video is highly bandwidth intensive; multiple copies of a video stream would surely consume a lot of bandwidth, thus aggravating the bottlenecks. As such, this paper proposes a method for enhancing reliability by sending just the essential video signals in multiple copies and the enhancement data in a single copy, thus avoiding violation of Andreev et al.'s design objective of "minimizing cost".

The overlay network architecture designed by Andreev et al consists of the following components (distributed across the Internet globally):

• entrypoint -- the video stream enters the overlay network through this point and it receives the encoded data from the encoder;

• reflector -- receives data from the entrypoint and delivers a copy to the edgeserver; and

• edgeserver -- receives copy / copies of data stream from the reflector and then "reassembles" the data to create a good copy of the signal stream to be delivered to the end-user.

Utilizing the above overlay network architecture, this project employs a layered encoder to segregate the high frequency (enhancement) components of the video signals from their low frequency (base) counterparts, creating one base layer and multiple enhancement layers. This enables transmission of video data with biased protection towards the essential signals, as well as saving bandwidth. Instead of delivering multiple copies of the same stream, the high and low data are packetized into base and enhancement packets. At the entrypoint, the base packets are interleaved with the enhancement packets in the manner described in Section III.A. This interleaved stream is sent to the reflector. To ensure good quality video, an additional base layer stream is also delivered to the reflector. Finally, the interleaved stream and the base layer stream are both transmitted to the edgeserver, which reconstructs the stream; if any base data are lost, it takes them from the base layer stream. This stream is sent to the end-user (refer to Fig. 5). The stream labelled E is the enhancement; the ones labelled B and B+E are the base and interleaved streams respectively. The ALF principle (path-MTU-size ADUs) is used at the end-user for displaying the image data on the screen.

Fig. 5 Overlay multicast network architecture: the layered encoder at the source produces the interleaved (B+E) and base-only (B) streams, which flow from the source through the reflectors to the edge servers and on to the end-users.
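As an illustration of the edge-server reassembly step (assumed behaviour consistent with the description above, not Andreev et al.'s implementation), the sketch below repairs base packets missing from the received interleaved stream using the separately delivered base-only stream:

def reassemble(interleaved, base_copy):
    # interleaved: received B+E stream, with None marking a lost packet slot;
    # items are ("B", i, payload) or ("E", j, k, payload).
    # base_copy: dict {i: payload} from the redundant base-only stream.
    received_base = {}
    enhancements = []
    for pkt in interleaved:
        if pkt is None:
            continue                      # lost packet
        if pkt[0] == "B":
            received_base[pkt[1]] = pkt[2]
        else:
            enhancements.append(pkt)
    repaired = dict(base_copy)            # start from the redundant base copies
    repaired.update(received_base)        # keep whatever arrived on the B+E stream
    return repaired, enhancements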


IV. EVALUATION

Using loss vectors extracted from datasets taken from the MBone by Yajnik et al [8], experiments were conducted on the method of interleaving base and enhancement packets. The datasets used are traces of packets received from World Radio Network (WRN) in Washington D.C. [8]. The datasets from traces of packets received from UC Berkeley Seminar (UCB) and Radio Free Vat (RFV), both in California, have also been extracted as loss vectors.

It is to be noted that these datasets are taken from audio traces, but their suitability for conducting the experiments in this study is justified by the fact that video data consume significantly more bandwidth than audio. As such, the loss rate and loss characteristics for video would be much worse than in these audio datasets. If the proposed method can improve on these loss characteristics, then it can perform much better with video datasets, which are hard to find because of the lack of real-time video streaming in the Internet.

Four sets of experiments (only two of which are discussed due to space limits) have been conducted to test the proposed method. For each of the experiments, it was assumed that the video signals had been divided into multiple streams and that blocks of essential video data within each packet had been scrambled. Two of the four sets of experiments are:

(1) Interleaving with two enhancement packets: a hypothetical packet sequence can be in the form Bx(1), {Ex(4,2) ,Ex(14,1)}, Bx(2), {Ex(22,1), Ex(8,1)}, …. For all the experiments, base packets are filled with base layer data and enhancement packets are packed with enhancement data.

(2) Interleaving with four enhancement packets: a hypothetical packet sequence may be in the form Bx(1), {Ex(4,4), Ex(14,1), Ex(24,2), Ex(9,1)}, Bx(2), {Ex(22,1), Ex(8,1), Ex(18,1), Ex(14,2)}, ….
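Conceptually, each experiment replays a recorded loss vector against an interleaved stream and measures the base-packet loss bursts that remain. A simplified sketch of this procedure (the stream layout and loss set below are illustrative, not from the traces):

def base_loss_bursts(stream, lost_seq_numbers):
    # stream: interleaved packets in send order; lost_seq_numbers: 1-based
    # sequence numbers lost according to the trace. Returns the lengths of
    # bursts of consecutive base packets that were lost.
    bursts, run = [], 0
    for seq, pkt in enumerate(stream, start=1):
        if pkt[0] != "B":
            continue                       # enhancement packets do not break a base run
        if seq in lost_seq_numbers:
            run += 1
        else:
            if run:
                bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return bursts

# A B E E B E E ... pattern with a burst of 4 lost packets (sequence numbers 2-5)
stream = [("B",), ("E",), ("E",)] * 4
print(base_loss_bursts(stream, {2, 3, 4, 5}))   # -> [1]: only one base packet is lost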

Receiver ocarina (machine name), from Kentucky, tuned into World Radio Network, Washington D.C. Over a perfect communication channel, it would have received 45000 audio packets. The results for receiver ocarina represent all other receivers with losses of similar characteristics, namely:

• a loss rate of less than or equal to 6%;
• short loss lengths of less than ten; and
• isolated losses contributing most of the percentage loss.

ocarina has one of the lowest percentage losses, 4.64%, with a longest burst of 4, which is the shortest maximum burst recorded from a survey of all the datasets taken from Yajnik et al.'s [8] MBone traffic measurements, apart from receiver law with source RFV, which has only isolated losses. Of the 4.64% losses recorded, 94.18% are from isolated losses and burst losses make up the remaining 5.82%. There are only 3 different burst loss lengths: 2 (154 of them), 3 (18 of them) and 4 (5 of them).

With the proposed method of interleaving with 2 enhancement packets, receiver ocarina's five bursts of four consecutive packets are reduced to one. The longest burst loss remains at four. After applying the proposed methods of interleaving base packets with 2 and 4 enhancement packets, the percentages of isolated losses increase to 94.99% and 95.98%, respectively. Accordingly, for the 2-enhancement method, burst losses of lengths 2 and 4 decrease to 4.32% (from 5.06%) and 0.10% (from 0.16%); the percentage loss of length 3 remains the same. The results of using two enhancement packets for separating consecutive base packets are encouraging.

ocarina's cumulative distribution of losses is depicted by the "…" line in Fig. 6. The cumulative distribution of base packet losses for the interleaving method with 2 enhancement packets is the "xxx" line, and that for the interleaving method with 4 enhancement packets is depicted by the "***" line. The "- - -" line presents the cumulative distribution of base packet losses for the interleaving variant with 6 enhancement packets, and the "ooo" line represents that of the interleaving case with 9 enhancement packets.

Fig. 6 Results of ocarina (WRN Dec 18 trace, 4.64% loss) using the interleaving method: cumulative errors versus burst loss length for the original trace and the interleaved variants with 2, 4, 6 and 9 enhancement packets.

Using basic probability theory (sample spaces and events), probability models relating the interleaved packet stream to the original packet stream received by a participant of the audio multicast session can be created. If P(n) denotes the number of base-packet loss bursts of length n in the interleaved stream, and f(n) the number of loss bursts of length n in the original packet sequence representing the actual traffic trace of a receiver collected from the MBone (refer to Fig. 2), then a relationship can be established between P(n) and f(n). For n = 1, with two enhancement packets between consecutive base packets, it is:

P(1) = (1/3) f(1) + (2/3) f(2) + f(3) + (2/3) f(4) + (1/3) f(5).

For n = 2, the model is:

P(2) = (1/3) f(4) + (2/3) f(5) + f(6) + (2/3) f(7) + (1/3) f(8),

which shows that for a burst loss of 2 consecutive base packets to occur, a burst of at least 4 contiguous packets from


the interleaved stream has to occur. Bursts of 5 consecutive packets contribute more to 2-successive-base-packet losses than bursts of 4 contiguous packets. Every burst of 6 consecutive packets would give rise to a burst loss of 2 successive base packets. Bursts with loss length greater than 8 will not contribute to 2-base-packet losses.
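For reference, the model can be applied numerically as follows (a sketch under the same two-enhancement-packet assumption; the example burst counts are those quoted above for ocarina's trace):

def base_burst_model(f, n_max=5):
    # f: {burst length in the trace: count}; returns the expected number of
    # base-packet loss bursts of each length n, for the B E E interleaving.
    P = {}
    for n in range(1, n_max + 1):
        m = 3 * (n - 1)
        P[n] = (f.get(m + 1, 0) / 3 + 2 * f.get(m + 2, 0) / 3 + f.get(m + 3, 0)
                + 2 * f.get(m + 4, 0) / 3 + f.get(m + 5, 0) / 3)
    return P

# ocarina's burst counts from the WRN trace: 154 bursts of length 2, 18 of 3, 5 of 4
print(base_burst_model({2: 154, 3: 18, 4: 5}))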

V. CONCLUSIONS

The video streaming research community has designed a rich assortment of overlay, peer-to-peer, and application layer multicast (ALM) network architectures [14, 15, 16, 17]. Based on the overlay multicast network architecture in [14], this study proposes a scheme to improve the streaming method of that architecture.

Implementation of the proposed method requires an efficient layered coder 3 which can segregate the incoming video signals into multiple layers, one of which is the base layer containing vital information and the rest are enhancement layers containing non- and/or less essential video data. This segregation of data introduces the ability to interleave, effectively isolating the essential information from the non-essential and/or less essential information. It is anticipated that addition of a scrambling process for both types of information distributes the base information even further, thus preventing excessive loss of contiguous base information when multiple consecutive packets are lost in bursts.

Regarding the issue of an ideal layered codec, McCanne's PVH [4] codec is one possible candidate, with suitable modifications and assuming that these modifications can be performed on the codec. This paper has also proposed a one-to-one mapping of ADUs (Application Data Units) onto each of the base and enhancement packets, enabling each of the base and enhancement packets to be decoded independently of the others.

This and the related properties of ADUs would be ideal for Internet video applications for a number of reasons:

• video data can be processed as soon as they arrive at the receiver end, thus providing faster progressive display of images;
• reduced buffer size for the receivers;
• reduced jitter at the receiving application; and
• graceful degradation of video quality when network performance deteriorates.

Further, unequal error control treatments can be exercised by incorporating redundant key video signals in the interleaving/scrambling process, thus offering biased protection towards critical video signals. This proposal will be the subject of our next project.

3 An efficient and suitable decoder is another requirement; this assumed decoder should be able to decode packets (ADUs), base or enhancement, as soon as they arrive.

REFERENCES

[1] S. B. M. Edwards, L. A. Giuliano and B. R. Wright. Interdomain Multicast Routing: Practical Juniper Networks and Cisco Systems Solutions. Addison Wesley Professional, April 2002.

[2] Stephen E. Deering. Multicast Routing in a Datagram Internetwork. PhD thesis, Stanford University, December 1991.

[3] Y. Wang and Q-F Zhu. Error Control and Concealment for Video Communication: A Review. In Proceedings of the IEEE, vol. 86, no. 5, May 1998.

[4] S. McCanne. Scalable Compression and Transmission of Internet Multicast Video. PhD thesis, University of California, Berkeley, December 1996.

[5] L. Rizzo, L. Vicisano and J. Crowcroft. The RLC multicast congestion control algorithm. IEEE Network, Special Issue on Multicast, 1999.

[6] T. Turletti, S. F. Parisis and J-C. Bolot. Experiments with a Layered Transmission Scheme over the Internet. Technical Report #3296, INRIA Sophia Antipolis, France, 1997.

[7] S. Iren. Network-conscious image compression. PhD Dissertation, CIS Dept., University of Delaware, 1999.

[8] M. Yajnik, J. Kurose and D. Towsley. Packet Loss Correlation in the MBone Multicast Network. In Proceedings of the IEEE Global Internet Mini-Conference (part of GLOBECOM '96), pp. 94-99, London, England, November 1996.

[9] D. Clark and D. Tennenhouse. Architectural considerations for a new generation of protocols. In Proceedings of SIGCOMM ’90, pp.201-208, Philadelphia, PA, September 1990.

[10] M. Hosseini, D.T. Ahmed, S. Shirmohammadi, N.D. Georganas. A Survey of Application-Layer Multicast Protocols. IEEE Communications Surveys & Tutorials, vol. 9, pp.58-74, September 2007.

[11] A. S. Tom, C. L. Yeh and F. Chu. Packet video for cell loss protection using deinterleaving and scrambling. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, pp. 2857-2860, Toronto, Canada, May 1991.

[12] Q.-F. Zhu, Y. Wang and L. Shaw. Coding and cell loss recovery for DCT-based packet video. IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, pp. 248-258, June 1993.

[13] S. K. Chin, “Analysis of a Method for Improving Perceptual Quality of Video in a Multicast Network”, published in Proceedings of the 2005 IEEE Sarnoff Symposium, pp. 215-218, April 18-19, 2005, Princeton, New Jersey.

[14] K. Andreev, B. M. Maggs, A. Meyerson, R. K. Sitaraman. Designing Overlay Multicast Networks For Streaming. In Proceedings of the ACM Symposium on Parallel Algorithms and Architectures, June 2003.

[15] S. Banerjee, B. Bhattacharjee, C. Kommareddy, “Scalable application layer multicast”, in ACM SIGCOMM Computer Communication Review, Oct. 2002, pp. 205-217

[16] D. A. Tran, K. A. Hua, and T. T. Do, "A peer-to-peer architecture for media streaming," IEEE J. Sel. Areas Commun., vol. 22, no. 1, pp. 121-133, Jan. 2004.

[17] S. Y. Shi, J. S. Turner, "Routing in Overlay Multicast Networks," IEEE INFOCOM, vol. 3, pp. 1200-1208, New York, NY, USA, Jun. 2002.
