Nishanth Sastry
King’s College London
Saving the Internet from On-demand Video Streaming
Content
Plan for the talk
• Content Delivery: a brief introduction
• Analysis of drawbacks of current systems
• The CD-GAIN take on content delivery
http://bit.ly/cd-gain
Some statistics
• Cisco Visual Networking Index 2013-18:
– By 2018, videos will be 79% of all traffic
– Currently they comprise 66% of all traffic
• Netflix alone is 33% of peak time US traffic
• 44% of UK households use BBC iPlayer
Introducing BBC iPlayer
• 9 months: May 2013 – Jan 2014
• 1.9 Billion sessions
• In one representative month:
– 32 Million users/devices
– 20 Million IP addresses
• In London alone, in one month:
– 1.26 Million IPs
– 2.15 Million users/devices
How do Netflix/YouTube/iPlayer scale?
If Internet connection is bottleneck,
bypass it by replicating!
Multihoming
Content Delivery Network
If Internet connection is bottleneck, bypass it by replicating!
Replication has fundamentally
changed the Internet’s structure
2007
Page 14 - Labovitz SIGCOMM 2010
Traditional Internet Model
Replication has fundamentally
changed the Internet’s structure
C. Labovitz et al., Internet Inter-domain Traffic.
Proc. SIGCOMM 2010
2009
Page 15 - Labovitz SIGCOMM 2010
A New Internet Model
Flatter and much more densely interconnected Internet
Disintermediation between content and “eyeball” networks
New commercial models between content, consumer and transit
Settlement Free
Pay for BW
Pay for access BW
Problem solved! But is the solution right?
1. No longer an “Internet” of connected nets
– Have hyper-giants become “too big to fail”?
Problem solved! But is the solution right?
2. Distributed systems hard to engineer:
– Consistency
– Failover
– …
Global scale distributed systems extremely hard!
3. Global replica infrastructure is expensive
– Content providers need to pay hyper-giants
– Or… be hyper-giants themselves
Problem solved! But is the solution right?
4. Even CDNs don’t cover the entire globe:
performance and cost diverge by region
HH Liu et al., Optimizing cost and performance for content multihoming. SIGCOMM 2012
5. Misses opportunities for local sharing!
Problem solved! But is the solution right?
Taking stock with TV Content
How did we consume content before?
How do we consume content now?
What can we learn from what we see?
How did we watch TV before?
http://www.watfordobserver.co.uk/nostalgia/memories/10099510.Coronation_treat_as_community_gathers_around_the_only_TV/
Today, TV is just another “app”
What changed: Push → Pull
Superficially: audience to TV set ratio has decreased
At a fundamental level:
audience per “broadcast” is lower
“Broadcast” time is chosen by the consumer
Traditional mass media pushed content to consumer
Current dominant model has changed to pull
But people have not changed!
New Directions for
Content Delivery
1. Select few items become globally popular
Can we exploit redundancy using P2P?
2. …but individual users may have favourites
Can we predict user quirks/favourites and personalise content delivery?
3. What if we could in fact change users?
Can we “nudge” user behaviour and make content delivery cheaper for all?
1. Can we exploit redundancy with peer-assistance?
P2P works at scale for long-duration content such as TV
under the “online while you watch” model.
P2P-assisted content delivery:
Looks good, but details important!
Simple model – augmenting traditional delivery:
Server-based content delivery as mainstay
Shift seamlessly to P2P as more users join
Peer availability offloads traffic from provider!
? Will there be enough peers in swarms?
• Peer arrivals may be asynchronous
• Peers may not participate in uploads
? Can P2P swarms be ISP-friendly & local?
• …and still work well?
Swarm fragmentation factors
• ISP friendliness
• Bitrate stratification
• Partial participation
• Limited upload bandwidth
Traffic offloading gain as a function of
peer availability (swarm capacity, c)
Model swarms as an infinite-server queue (extending Menasche et al., CoNEXT 2009)
• Server load increases with no. of users
• … until swarm has one user on average
• Subsequent increase in load decreases server traffic as swarm takes over!
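The shape of this curve follows from the queueing model. A minimal sketch (my assumptions, not the paper's exact formulation: Poisson request arrivals at rate λ, mean viewing time T, and a request hits the origin server only when the swarm is momentarily empty, which in an M/G/∞ queue happens with probability e^(−λT)): server load is then λ·e^(−λT), which peaks exactly where λT = 1, i.e. when the swarm holds one user on average.

```python
import math

# Sketch of the infinite-server swarm model (assumed: Poisson arrivals at
# rate lam, mean viewing time T; the origin serves a request only when the
# swarm is empty, probability exp(-lam * T) in an M/G/infinity queue).
def server_load(lam, T):
    return lam * math.exp(-lam * T)

T = 1.0                                   # normalised viewing duration
rates = [0.1 * i for i in range(1, 51)]   # offered load, 0.1 .. 5.0
loads = [server_load(r, T) for r in rates]
peak_rate = rates[loads.index(max(loads))]
print(peak_rate)  # server load peaks near lam * T = 1 (one user in swarm)
```

Past that peak, every extra user is more likely to find a live swarm than an empty one, so added demand offloads the server rather than loading it.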
Let’s test on real data from London
Gains in swarms fragmented by
ISP-locality & Bit-rate stratification
Why does fragmentation do no harm?
Top 8 ISPs = 70% of traffic; top 2 bitrates = 70% of sessions
“Online while you watch” model
critical for ensuring availability
ISP-friendly P2P is also greener
because of fewer hops to replica!
Carbon savings of P2P over CDN
for one ISP’s topology
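To see why fewer hops translate to carbon savings, consider a toy energy model (all numbers and names below are hypothetical, for illustration only): if transport energy per delivered gigabyte grows roughly with the number of network devices on the path, a peer two hops away inside the same ISP beats a CDN replica several hops out.

```python
# Toy energy model (all figures hypothetical, for illustration only):
# energy per delivered GB is proportional to hops between replica and user.
ENERGY_PER_HOP_J_PER_GB = 20.0  # assumed constant

def delivery_energy(gb, hops):
    return gb * hops * ENERGY_PER_HOP_J_PER_GB

cdn_hops = 6     # assumed path length: user -> CDN edge cache
p2p_hops = 2     # assumed path length: user -> peer in the same ISP
traffic_gb = 100.0
saving = 1 - delivery_energy(traffic_gb, p2p_hops) / delivery_energy(traffic_gb, cdn_hops)
print(f"{saving:.0%}")  # shorter in-ISP paths cut transport energy
```

The actual savings depend on the ISP's topology and the energy model, as the figure above for one ISP shows; the sketch only captures the direction of the effect.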
2. Can we personalise content delivery?
Users are highly predictable.
Simple analytics can offload traffic and
decrease carbon footprint
Why iPlayer, not DVRs?
• DVRs have >50% penetration in US, UK
• Many (e.g. YouView) don’t need cable
• Could also use TV tuner and record on laptop
Understanding and decreasing the Network Footprint of Catch-up TV, WWW ’13
Because, people don’t remember to record!
Can we help users record
what they want to watch?
Speculative Content Offloading and
Recording Engine
Caching at the very edge completely offloads traffic!
Which features to use? – I
• BBC proposes, consumer disposes!
• Serials: ~50% of content corpus, but 80% of watched content!
Which features to use? – II
Which features to use? – III
SCORE = predictor + optimiser
• Predict using user affinity for:
– Serials: episodes of the same programme
– Favourite genres
• We can optimise for decreasing traffic or carbon footprint
• Decreasing carbon decreases traffic, but not vice versa
• Turns out we only take 5-15% hit by focusing on carbon
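A minimal sketch of such a predictor (names, weights, and the scoring rule below are hypothetical, not the actual SCORE implementation): rank upcoming programmes by the user's past affinity for the same serial and for its genre, then speculatively record the top-k at the edge.

```python
from collections import Counter

def predict(history, schedule, k=2, serial_weight=2.0, genre_weight=1.0):
    """Rank `schedule` items by a user's affinity (weights are illustrative).
    history:  list of (serial_id, genre) the user watched
    schedule: list of (serial_id, genre) available to record"""
    serials = Counter(s for s, _ in history)
    genres = Counter(g for _, g in history)
    def score(item):
        s, g = item
        return serial_weight * serials[s] + genre_weight * genres[g]
    return sorted(schedule, key=score, reverse=True)[:k]

history = [("eastenders", "soap"), ("eastenders", "soap"), ("topgear", "factual")]
schedule = [("eastenders", "soap"), ("news", "factual"), ("film", "drama")]
print(predict(history, schedule, k=1))  # -> [('eastenders', 'soap')]
```

The optimiser then decides which of the predicted items are actually worth recording, given a traffic or carbon objective.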
Performance evaluation
Compare SCORE against an Oracle that knows future requests
Oracle saves:
• Up to 97% of traffic
• Up to 74% of energy
• Savings relatively insensitive to choice
of energy model parameters
• SCORE achieves ~40–60% of the Oracle’s savings, more for energy than for traffic optimisation
Not all of these savings come from
predicting popular content
• Indiscriminately recording top n shows can lead to
negative energy savings!
• Personalised approach necessary, despite
popularity of “prime time” content
3. ‘Nudging’ user behaviour
Decrease content delivery costs by asking users
to “go easy” on the infrastructure
What is ‘nudging’?
Current mindset: User is king
Operators/providers attempt to
satisfy all user accesses
Idea: ‘Nudge’ user to behaviours
better suited to network!
Passive nudging
Give users flexibility to choose: on-demand!
Active nudging
Time-shift users’ access pattern
E.g., lower price for off-peak access
Space-shift users’ accesses to different ISP
E.g., move smartphones from 3G to WiFi
(Applying SCORE to smartphones)
Content-shifting: suggest alternate items for
users to watch, based on cache contents!
Digital Media Convergence:
Remember the hype?
Good News: it has happened
CD-GAIN: New directions for a
Content-centric Internet
1. Can we exploit redundancy using P2P?
– YES, but “online while you watch” is critical
2. Can we predict user quirks/favourites and personalise content delivery?
– YES, Speculative Content Offloading and Recording Engine (SCORE)
3. Can we “nudge” user behaviour and make content delivery cheaper for all?
Guiding principles
1. Cache as close to user as possible
2. Increase cache reuse by any means!
3. Decrease peak usage: infrastructure
can be provisioned for smaller load
• Can increase average use
(speculative traffic is fine!)
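Principle 3 can be illustrated with a toy load-shaping calculation (the hourly numbers and the prefetchable fraction are hypothetical): moving speculatively prefetchable demand from peak hours into off-peak hours lowers the peak the infrastructure must be provisioned for, without reducing total traffic carried.

```python
# Toy peak-shaving sketch (all numbers hypothetical): shift the fraction of
# peak-hour demand that could have been recorded in advance into off-peak
# hours; the peak drops even though total traffic is unchanged.
demand = [2, 2, 3, 10, 12, 11, 4, 2]  # hourly load; peak hours are >= 10
prefetchable = 0.5                    # assumed fraction recordable in advance
peak_hours = [d >= 10 for d in demand]
moved = sum(d * prefetchable for d, p in zip(demand, peak_hours) if p)
n_offpeak = sum(1 for p in peak_hours if not p)
shaped = [d * (1 - prefetchable) if p else d + moved / n_offpeak
          for d, p in zip(demand, peak_hours)]
print(max(demand), round(max(shaped), 1))  # peak drops from 12 to 7.3
```

Total traffic (the sum over all hours) is conserved; only the peak shrinks, which is what provisioning cost tracks.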
Saving the Internet from On-demand Video Streaming
Content
http://bit.ly/cd-gain
Nishanth Sastry
King’s College London
Joint work with:
Mustafa Al-Bassam, King’s College London
Jigna Chandaria, BBC R&D
Jon Crowcroft, U. Cambridge
Nick Feamster, Georgia Tech
Dmytro Karamshuk, King’s College London
Richard Mortier, Uni. Nottingham
Gianfranco Nencioni, Uni. Pisa
Andy Secker, BBC R&D
Gareth Tyson, Queen Mary London
Funding support from UK EPSRC