Piotr Srebrny

Outline:
- Problem statement
- Packet caching
- Thesis claims
- Contributions
- Related works
- Critical review of claims
- Conclusions
- Future work
The Internet is a content distribution network: P2P, file hosting, and streaming account for more than 80% of Internet traffic ("Internet Study", Ipoque, 2009).
A single-source, multiple-destination transport mechanism is therefore fundamental.
At present, the Internet does not provide an efficient multi-point transport mechanism.
A server transmitting the same data to multiple destinations wastes Internet resources: the same data traverses the same path multiple times.
[Figure: server S sends four copies of the same packet P, addressed to A, B, C, and D, over a shared path.]
The goal of this work is to remove this Internet redundancy in a minimally invasive way.
- "Datagram routing for internet multicasting", L. Aguilar, 1984: explicit list of destinations in the IP header
- "Host groups: A multicast extension for datagram internetworks", D. Cheriton and S. Deering, 1985: destination address denotes a group of hosts
- "A case for end system multicast", Y.-H. Chu et al., 2000: application layer multicast
Consider two packets A and B that carry the same content and travel the same few hops.
[Figures: on the shared hops, packet B's payload is redundant; if the first shared hop remembers packet A's payload, packet B can traverse the shared path as a header only and have its payload restored afterwards.]
I. A packet caching system can achieve near-multicast bandwidth savings
II. A packet caching system requires server support
III. A packet caching system is incrementally deployable
IV. A packet caching system preserves fairness in the Internet
Principles
Feasibility
Environmental impact
I. Contribution
Network elements:
- Link: a medium transporting packets; very deterministic; throughput limited in bits per second
- Router: switches data packets between links; very unpredictable; throughput limited in packets per second
(I) Cache payloads on links
- Caching is done on a per-link basis
- The Cache Management Unit (CMU) removes payloads that are already stored at the link exit
- The Cache Store Unit (CSU) restores payloads from a local cache
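The CMU/CSU pairing can be sketched as follows. This is a minimal model, not the actual implementation: packets are plain dicts, payloads are identified by a hash, and the CMU's index and the CSU's store are conflated into a single shared table.

```python
import hashlib

# Shared table standing in for the link cache; in the real design the CMU at
# the link entry keeps only an index, while the CSU at the exit stores payloads.
link_cache = {}

def cmu(packet):
    """Link entry: truncate the packet if its payload is already cached at the exit."""
    pid = hashlib.sha256(packet["payload"]).hexdigest()
    packet = dict(packet, payload_id=pid)
    if pid in link_cache:
        packet["payload"] = None             # transmit the header only
    else:
        link_cache[pid] = packet["payload"]  # first occurrence: cache and send in full
    return packet

def csu(packet):
    """Link exit: restore a truncated payload from the local cache."""
    if packet["payload"] is None:
        packet = dict(packet, payload=link_cache[packet["payload_id"]])
    return packet

# Two packets with the same payload: the second crosses the link as a header only.
p1 = csu(cmu({"dst": "A", "payload": b"chunk"}))
p2_on_link = cmu({"dst": "B", "payload": b"chunk"})
assert p2_on_link["payload"] is None
p2 = csu(p2_on_link)
assert p1["payload"] == p2["payload"] == b"chunk"
```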
Link cache processing must be simple:
- ~72 ns to process a minimum-size packet on a 10 Gbps link
- Modern memory r/w cycle: ~6-20 ns
Link cache size must be minimised:
- At present, a link queue is scaled to 250 ms of the link traffic
- Difficult to build!
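These budgets can be sanity-checked with back-of-the-envelope arithmetic. The 84-byte wire footprint of a minimum Ethernet frame (including preamble and inter-frame gap) is an assumption; the slide's ~72 ns corresponds to ~90 bytes on the wire.

```python
link_rate_bps = 10e9          # 10 Gbps link

# Minimum Ethernet frame on the wire: 64 B frame + 8 B preamble + 12 B gap (assumption)
min_wire_bytes = 64 + 8 + 12  # 84 B
t_per_packet_ns = min_wire_bytes * 8 / link_rate_bps * 1e9
print(round(t_per_packet_ns, 1))   # 67.2 ns, the same order as the slide's ~72 ns

# A queue scaled to 250 ms of 10 Gbps traffic:
queue_bytes = link_rate_bps * 0.250 / 8
print(queue_bytes / 1e6)           # 312.5 MB of fast memory: difficult to build
```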
(II) A source of redundant data must support caching!
- The server can transmit packets carrying the same data within a minimum time interval
- The server can mark its redundant traffic
- The server can provide additional information on packet content
Two components of the CacheCast system: server support, and a distributed infrastructure of small link caches.
[Figure: server S sends a packet train towards A, B, C, and D; each link is equipped with a CMU at its entry and a CSU at its exit.]
Packet train:
- Only the first packet carries the payload
- The remaining packets are truncated to the header
Packet train duration time
- It is sufficient to hold a payload in the CSU for the packet train duration time
- What is the maximum packet train duration time?
Back-of-the-envelope calculations show that ~10 ms caches are sufficient.
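The slides do not show the calculation itself; one plausible back-of-the-envelope version, assuming a 100 Mbps server uplink, 1000 destinations per train, and 64-byte truncated headers (all illustrative numbers), is:

```python
uplink_bps = 100e6   # assumed server uplink: 100 Mbps
n_dst = 1000         # assumed number of destinations in one packet train
header_bytes = 64    # assumed size of a truncated (header-only) packet

# The train duration is roughly the time to push all truncated headers
# onto the server's uplink, since only the first packet carries the payload.
duration_s = n_dst * header_bytes * 8 / uplink_bps
print(duration_s * 1e3)   # 5.12 ms: comfortably within a 10 ms cache lifetime
```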
II. Contribution
Two aspects of the CacheCast system:
I. Efficiency: how much redundancy does CacheCast remove?
II. Computational complexity: can CacheCast be implemented efficiently with present technology?
CacheCast and 'perfect multicast':
'Perfect multicast' delivers data to multiple destinations without any overhead.
CacheCast overheads:
I. A unique packet header per destination
II. Finite link cache size, resulting in payload retransmissions
III. Partial deployment
Example
L_m: total number of multicast links
L_u: total number of unicast links

    δ_m = 1 − L_m / L_u

E.g., with L_m = 5 and L_u = 9:

    δ_m = 1 − 5/9 = 4/9 ≈ 44%
A CacheCast packet consists of a unicast header part (h) and a multicast payload part (p), with sizes s_h and s_p:

    δ_CC = 1 − (s_h·L_u + s_p·L_m) / ((s_h + s_p)·L_u)

Thus:

    δ_CC = δ_m · 1/(1 + r_h),  where r_h = s_h/s_p and δ_m = 1 − L_m/L_u

E.g., using packets where s_p = 1436 B and s_h = 64 B, CacheCast achieves 96% of the 'perfect multicast' efficiency.
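The two forms of the efficiency formula can be checked numerically against each other, using the example values from the slides:

```python
def delta_m(L_m, L_u):
    # Savings of perfect multicast over plain unicast
    return 1 - L_m / L_u

def delta_cc(L_m, L_u, s_h, s_p):
    # CacheCast savings: headers travel every unicast link,
    # payloads only the multicast links
    return 1 - (s_h * L_u + s_p * L_m) / ((s_h + s_p) * L_u)

L_m, L_u, s_h, s_p = 5, 9, 64, 1436
r_h = s_h / s_p

# The expanded and factored forms agree
assert abs(delta_cc(L_m, L_u, s_h, s_p) - delta_m(L_m, L_u) / (1 + r_h)) < 1e-12

# CacheCast reaches ~96% of perfect-multicast efficiency
print(round(delta_cc(L_m, L_u, s_h, s_p) / delta_m(L_m, L_u), 3))   # 0.957
```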
[Figure: system efficiency δ_m for 10 ms link caches.]
CMU and CSU elements may be deployed only partially.
[Figure: a chain of six links from server S, with CMU/CSU pairs deployed on only some of them.]
Computational complexity may render CacheCast inefficient.
Implementations:
- Server support: a Linux system call and an auxiliary shell command tool
- Link cache elements: implemented as processing elements in the Click Modular Router software
New system call: msend(), compared with the standard send() system call.
A server transmitting to 100 destinations using (a) a loop of send() system calls and (b) a single msend() system call: the msend() system call outperforms the standard send() system call when transmitting to multiple destinations.
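The slides do not give msend()'s signature. A minimal Python model of its semantics, in which one payload is handed over once and emitted as a packet train, can be sketched as follows (the function name, packet representation, and sizes are illustrative):

```python
def msend(headers, payload):
    """Model of msend() semantics: the payload crosses the API once, and the
    resulting packet train carries it only in the first packet."""
    train = [{"header": headers[0], "payload": payload}]
    train += [{"header": h, "payload": None} for h in headers[1:]]
    return train

headers = [f"hdr->{i}" for i in range(100)]   # 100 destinations
train = msend(headers, b"x" * 1436)

full = [p for p in train if p["payload"] is not None]
print(len(train), len(full))   # 100 1: one payload copy instead of 100
```

A loop of send() calls would instead copy the payload across the user-kernel boundary once per destination, which is where the measured per-call overhead accumulates.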
The CMU and CSU are implemented as processing elements in the Click Modular Router software, forming a CacheCast router.
Due to the CMU and CSU elements, a CacheCast router cannot forward packet trains at line rate.
However, when compared with a standard router, a CacheCast router can forward more data.
Testbed setup: clients on machines A and B gradually connect to server S.
- The original paraslash server can only handle 74 clients
- The CacheCast paraslash server can handle 1020 clients, and more depending on the chunk size
- Server load is reduced when using large chunks
III. Contribution
Internet congestion avoidance relies on communicating end-points that adjust their transmission rate to network conditions.
CacheCast transparently removes redundancy, increasing network capacity.
It is not known in advance how congestion control algorithms behave in the presence of CacheCast.
CacheCast implemented in ns-2. Simulation setup:
- Bottleneck link topology
- 100 TCP flows and 100 TFRC flows
- Link cache operating on the bottleneck link
TCP flows consume the spare capacity; TFRC flows increase their end-to-end throughput.
CacheCast preserves the Internet's 'fairness'.
J. Santos and D. Wetherall, “Increasing effective link bandwidth by suppressing replicated data,” USENIX’98
A. Anand, A. Gupta, A. Akella, S. Seshan, and S. Shenker, “Packet caches on routers: the implications of universal redundant traffic elimination,” SIGCOMM’08
A. Anand, V. Sekar, and A. Akella, “SmartRe: an architecture for coordinated network-wide redundancy elimination,” SIGCOMM’09
I. A packet caching system can achieve near-multicast bandwidth savings
II. A packet caching system requires server support
III. A packet caching system is incrementally deployable
IV. A packet caching system preserves fairness in the Internet
Getting CacheCast into the real world:
- Server support
- Link cache
- Second and third generations of routers