
INF570

dario.rossi

Video streaming: P2PTV & YouTube

INF570 v09/2012

Dario Rossi
http://www.enst.fr/~drossi

Agenda

• P2PTV (live)
– Why is P2PTV worth studying?
– State of the art & historical perspective
– Case study: PeerStreamer
– Commercial systems in the wild Internet

• YouTube (VoD)
– Why is P2PTV not worth studying?
– How does it work?
– Flow control, system issues, etc.

Why is P2PTV worth studying ?

• The case for P2PTV
– Video is growing fast on the Internet
– All forms of video (TV, VoD, Internet, and P2P) will account for over 90% of traffic by 2013 (Cisco)

• Video is already causing trouble
– BBC and ISP clashes over iPlayer: http://news.bbc.co.uk/2/hi/technology/7336940.stm
– The great Obama traffic flood: http://asert.arbornetworks.com/2009/01/the-great-obama-traffic-flood/

BBC and ISP clashes over iPlayer

Why is P2PTV worth studying ?

The great Obama traffic flood

“The Obama inauguration marks a historic day in US politics and a remarkable day for the popularity of Internet streaming video. We look forward to watching more great things to come.”

Why is P2PTV worth studying ?

• P2PTV usage is growing
– About 7% of P2P traffic (2010)
– In China, ~250,000 P2P-TV users watching the same channel at the same time in 2006 (PPLive)
– P2PTV still not very popular in EU, though…

• Server-based paradigms cost money…
– For 250,000 users watching a 400 kbps video, 100 Gbps of bandwidth are needed, about 1/3 of Akamai's bandwidth in the same year! (a quick back-of-the-envelope check follows below)

• …P2P allows savings
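The 100 Gbps figure is simply the audience size multiplied by the per-user stream rate:

$250{,}000\ \text{users} \times 400\ \text{kb/s} = 10^{8}\ \text{kb/s} = 100\ \text{Gb/s}$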

Why is P2PTV worth studying ?

P2P-TV still not very popular in EU, though…

Why is P2PTV worth studying ?

Our focus

• Focus on P2PTV, not IPTV
– Application-layer multicast, not IP multicast

• Focus on live streaming, not VoD
– Live streaming is more challenging

• Focus on data transmission, not data encoding
– Codec-awareness may bring further benefits

• P2PTV in academia vs real world

P2PTV in academia

• P2PTV evolution in academia
– State of the art follows a roughly chronological order
– High-level perspective only, no specific system overview
– Some systems are actually implemented and used
– Active research trend (in P2P-related venues)

• Two main approaches, identified by the overlay topology
– Tree
  • Single-tree (NARADA, NICE, ZIGZAG, Peercast, ESM…)
  • Multiple-trees (CoopNet, SplitStream, P2PCast, PROMISE…)
– Mesh
  • Push vs Pull (CoolStreaming, PeerStreamer, PULSE, PRIME…)

• Overview of pros and cons
– Overlay construction complexity
– Overlay management and resilience
– Video diffusion performance

• Details of one system
– Further aspects in the peer-instruction reading at today's lab

Single-Tree

Video
– Video stream pushed from the source

Topology
– Tree rooted at the video source
– In-order delivery from parent to children

Systems
– NARADA, NICE, ZIGZAG, Peercast, ESM…


Single-Tree

Overlay construction: two important parameters
– Tree depth
– Fan-out

…and their implications:
– the deeper the tree, the higher the delay
– to shorten the delay, increase the fan-out
– the maximum fan-out is limited by the upload capacity
(a small numeric example follows below)
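To make the trade-off concrete, here is a minimal Python sketch (illustrative only: the helper names and the numbers are assumptions, not taken from any specific system) that computes how deep a complete distribution tree must be to cover a given audience, and how many children a peer can feed given its upload capacity:

# Illustrative sketch: depth of a complete fan_out-ary tree rooted at the
# video source, and the maximum fan-out an upload link can sustain.

def tree_depth(num_peers, fan_out):
    # depth 0 holds only the source; each extra level adds fan_out ** depth peers
    covered, depth = 1, 0
    while covered < num_peers:
        depth += 1
        covered += fan_out ** depth
    return depth

def max_fan_out(upload_kbps, video_kbps):
    # a parent must push one full copy of the stream to each of its children
    return upload_kbps // video_kbps

print(max_fan_out(1000, 400))          # ADSL-like uplink feeding a 400 kbps stream: 2 children
for f in (2, 4, 8):
    print(f, tree_depth(250_000, f))   # depths 17, 9 and 6: deeper tree = higher delay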

Single-Tree

Pros
– Simplicity

Cons
– Maintenance
  • Very low resilience in the face of peer departures
  • The tree cannot recover fast enough under high churn (e.g., think about zapping)
– Performance
  • Each subtree is limited by its bottleneck bandwidth
  • Inefficient: the bandwidth of leaf nodes is not used

Multiple-Trees

Video
– Multiple sub-streams (e.g., Multiple Description Coding, Layered Coding, etc.)

Topology
– Multiple independent sub-trees
– Each sub-tree distributes a different sub-stream: in-order delivery within each sub-tree
– Peers have different positions in different sub-trees: parent in a single sub-tree, and leaf in the remaining sub-trees (see the sketch below)

Systems
– CoopNet, SplitStream, P2PCast, PROMISE…

note: peers are represented twice
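A minimal sketch of the "parent in one sub-tree, leaf in the others" rule (this is the SplitStream idea in spirit; the round-robin assignment below is a hypothetical illustration, not the actual protocol):

# Illustrative sketch: each peer forwards (acts as an interior node) in exactly
# one sub-tree and is a leaf in all the others, so its whole upload capacity
# serves a single sub-stream.

NUM_SUBTREES = 4

def interior_subtree(peer_id):
    # hypothetical assignment rule: round-robin over the sub-trees
    return peer_id % NUM_SUBTREES

for peer_id in range(8):
    t = interior_subtree(peer_id)
    print(f"peer {peer_id}: forwards sub-stream {t}, leaf in the other {NUM_SUBTREES - 1} sub-trees")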


Multiple-Trees

Pros
– Performance
  • Exploits the bandwidth of leaf peers
– Resilience
  • A peer departure causes the loss of a single video description only

Cons
– Performance
  • Inherits the bottleneck limitation from parent to children
– Maintenance
  • Higher complexity to maintain the structure under churn
– Video
  • Higher overhead, larger transfer size

Meshes

Video
– Split into chunks, each carrying a short video duration

Topology
– Inspired by file-sharing swarming techniques (e.g., BitTorrent)
– No clear parent-child relationship
– The mesh topology evolves over time, based on peer performance and chunk availability
– Each chunk is distributed over the overlay following a different (or “instantaneous”) tree…
– …so chunks arrive out of sequence (as opposed to streams) at the receiver!
– Peers buffer the last few chunks (stored in a sliding window) that they (may) exchange over the mesh


Meshes

Two major flavors
– Push
  • A peer actively decides which chunks to send to its neighbors
  • Peers blindly pushing chunks may send redundant chunks
– Pull
  • A peer asks other peers for the chunks it misses
  • Peers blindly pulling chunks may request unavailable chunks

• Buffer map
– Periodic exchange of the list of available chunks, to avoid wasting resources
– In push, collisions may still yield redundant data transfers
– Pull introduces additional delay
(a minimal pull sketch follows below)
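A minimal sketch of buffer-map-driven pull scheduling, assuming each neighbor periodically advertises the set of chunk ids it holds (the function and variable names are hypothetical, for illustration only):

# Illustrative sketch: for each neighbor, request one chunk that we miss and
# that the neighbor advertises, avoiding duplicate requests for the same chunk.

def pull_requests(my_chunks, buffer_maps):
    requests, already_requested = {}, set()
    for neighbor, advertised in buffer_maps.items():
        # only chunks we miss and have not already requested from someone else
        missing = [c for c in advertised
                   if c not in my_chunks and c not in already_requested]
        if missing:
            chunk = min(missing)        # e.g., the one closest to the playout deadline
            requests[neighbor] = chunk
            already_requested.add(chunk)
    return requests

# usage: we hold chunks 1 and 2; neighbors A and B advertise their buffer maps
print(pull_requests({1, 2}, {"A": {2, 3, 4}, "B": {3, 5}}))   # {'A': 3, 'B': 5}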


Meshes

Other elements
– Peer selection
  • How to select the best neighbors, irrespective of their content?
  • E.g., active bandwidth and latency measurements (for performance), Autonomous System (for ISP friendliness), etc.
– Chunk scheduling
  • Which chunk to select?
  • E.g., the chunk closest to the playout deadline for one's own interest and QoS, or the latest chunk in the window to extend the system lifetime… (see the sketch after this list)

Pros and cons
– High resilience, robustness to churn, easy maintenance, can use encoding
– Delicate tradeoffs, no performance guarantee, but works well in practice
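The two scheduling policies mentioned above, as a tiny Python sketch (chunk ids are assumed to grow with playout time; names are hypothetical):

# Illustrative sketch of two chunk-scheduling policies over the missing chunks
# of the sliding window (lower chunk id = earlier playout deadline).

def earliest_deadline_first(missing_chunks):
    # greedy for one's own QoS: fetch the chunk about to be played out
    return min(missing_chunks)

def latest_useful_first(missing_chunks):
    # altruistic: fetch the newest chunk, so it remains shareable for longer
    return max(missing_chunks)

missing = {41, 42, 45}
print(earliest_deadline_first(missing))   # 41
print(latest_useful_first(missing))       # 45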


PeerStreamer

Commercial systems

• Joost became web-based

• Closed and proprietary systems...
– Several popular applications: PPLive, SopCast, TVAnts, PPStream, TVUplayer, Zattoo, ...
– Typically mesh-based, with rather different design decisions
– Totally unknown inner workings…
  • Do they choose nearby or faraway peers?
  • Are they ISP-friendly, confining traffic within AS boundaries?
  • Temporal dynamics of the mesh evolution?
  • Buffer-map exchange and scheduling policy?
  • Congestion control? Loss recovery? QoS!?
• …which are definitely worth investigating!

P2P-TV in practice

The traffic from the network point of view

NAPA-WINE Final Workshop 2011, Torino, 20-21 January 2011

Measuring the P2P-TV traffic

Characterizing the P2P-TV traffic

Modeling the P2P-TV traffic

What can the ISP do?

Measurements of P2P-TV traffic

• Active measurements
– Intervention on the network traffic
– Generation and injection of traffic with many machines
– Artificial scenarios
– Works according to our purpose

• Passive measurements
– Observation of the network traffic
– Only some probe points
– Natural behaviors
– Works according to the users


Passive vs active measurements

• Active: testbed (like in Skype)
• Passive
– 3 nationwide telecom operators:
  • Hungarian Telekom
  • Polish Telecom
  • France Telecom

ISP | No. and type of measurement points | No. of users  | Link rate
MT  | 1 at Edge (BRAS)                   | 500 – 1500    | 1 Gbps
MT  | At peering points                  | 600 K – 700 K | 1 – 10 Gbps
MT  | At MPLS Edge router                | 600 K – 700 K | 1 – 10 Gbps
FT  | 9 at Edge (BRAS)                   | 1000 – 5000   | 1 – 10 Gbps
TP  | 1 at Edge (BRAS)                   | 9000 – 10000  | 1 – 10 Gbps

Limiting downstream capacity


Impair even IP peers only (red)


New element: temporal and spatial diversity
(plots for populations of 100, 250, 500, and 15000 peers)

In more detail


– Same volume as YouTube
– Less than 1% of customers
– Internal peers cannot contribute to the content diffusion: Up:Down = 1:5

In more detail


FTTH customers: they UPLOAD 5 times what they download (Up:Down = 5:1)

Upload capacity of contributing peers


– Large number of high-speed peers: those allow the system to work
– Huge amount of low-speed peers

Where is traffic coming from?


• Content is downloaded preferentially from peers in the same AS
• Those do not have enough upload capacity
• So traffic is coming from HIGH-BANDWIDTH peers

Lesson learned

• We enforced critical network conditions to characterize the traffic of four applications
– Not really network friendly

• GLOBAL behavior
– Downstream limitation: creates congestion
– Loss: recovers losses
– Delay: suffers delays larger than 500 ms

• PER-PEER behavior
– Selects “good” peers
– Implements a per-peer preference


Lesson learned

• Applications are complex to understand
– Temporal variability
– Spatial diversity

• Peer discovery
– Continuous process
– Random peer selection
– No apparent locality properties

• Content exchange
– Strongly driven by upload capacity
– Some AS localization

Lesson learned

• User behavior + peer capacity are constraints

• Similar users
– Are interested in the same content
– Live in the same area
– Help localize traffic naturally

• Peer upload capacity
– Lots of low-bandwidth peers
– Fewer (but large) high-capacity peers
– Traffic must be downloaded from ASes that have high-capacity peers


YouTube

• Client behavior
• Server behavior

Summary

• Academia
• Commercial systems
• YouTube
