Segment-Based Proxy Caching of Multimedia Streams
Authors:
Kun-Lung Wu, Philip S. Yu, and Joel L. Wolf
IBM T.J. Watson Research Center
Proceedings of The Tenth International World Wide Web Conference, May 1-5, 2001, Hong Kong.
Contents
– Quick overview of multimedia caching
– Segment-based caching technique
– Segmentation scheme
– Cache admission policy
– Cache replacement policy
– Performance evaluation
– Simulation results
– Conclusion
Multimedia caching overview
Characteristics of multimedia caching:
– Real-time data
– Large object size
Some existing techniques:
– Proxy prefix caching
– Broadcasting, batching, patching, chaining, CMP, layered caching, MiddleMan, etc.
Goal of this paper
Propose a new caching technique in which a media object is divided into variable-sized segments. Segments at the beginning of a video are given higher priority in the cache.
How they arrived at this technique:
– The importance of the beginning portions of video objects.
– The belief that most objects should be cached only partially.
Segmentation scheme
A media file is divided into equal-sized blocks.
Blocks of a media object are grouped into variable-sized, distance-sensitive segments.
Segment size increases exponentially from the beginning: Segments 0 and 1 always have 1 block each, Segment 2 has 2, Segment 3 has 4, ..., and Segment i has 2^(i-1) blocks.
Segmentation scheme (cont.)
The number of segments cached is determined dynamically by the cache admission and replacement policies.
Why divide this way?
– A big chunk of data can be discarded quickly in a single action.
– The proxy may cache only the initial segments and give them higher priority.
Disadvantage: downloading the later portions may not be fast.
Cache admission policy
Only cache segments of "popular enough" media objects.
Apply different criteria to different segments of the same media object:
– One criterion is the distance of the segment (not the block) from the beginning of the media object.
– Other criteria are not mentioned?
There is a threshold number Kmin: segments numbered below Kmin are always cached.
Cache admission policy (cont.)
Kmin is determined so that the non-cached segments can be fetched from the content server in time to guarantee continuous streaming once playback has started; this depends on the network delay between the proxy and the content server, and on the content server's load.
Kmin is different for each object.
Assume the cache space is divided into two parts:
– space for caching initial segments
– space for caching later segments
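The admission rule above can be sketched as follows (Python; since the summary leaves the other admission criteria unspecified, the `popularity_threshold` parameter and the popularity test are assumptions):

```python
def admit_segment(segment_idx: int, k_min: int, prev_cached: bool,
                  ref_freq: float, popularity_threshold: float = 0.01) -> bool:
    """Segments numbered below k_min are always admitted; later segments
    are admitted only if the preceding segment is already cached (keeping
    cached segments contiguous) and the object looks popular enough.
    The popularity test here is a hypothetical placeholder."""
    if segment_idx < k_min:
        return True            # initial segments: always cached
    if not prev_cached:
        return False           # keep cached segments contiguous
    return ref_freq >= popularity_threshold
```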
Cache replacement policy
Caching value of a segment = reference frequency / segment distance, which favors the initial segments of popular objects.
Reference frequency = 1 / (current time - time of last request).
Two Least-Recently-Used (LRU) stacks are maintained: one for initial segments, one for later segments.
Cached segments are kept contiguous: segment i is cached only if segment i-1 is cached.
When an object is requested for the first time, only the initial Kmin segments are eligible for caching; later segments are not cached since their reference frequency is zero. Later segments may be cached on subsequent requests.
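The caching value can be sketched as below (Python; the `+ 1` offset on segment distance is an assumption to avoid dividing by zero for segment 0, since the summary does not give the exact distance definition):

```python
def ref_frequency(last_request: float, now: float) -> float:
    """Reference frequency = 1 / (current time - time of last request)."""
    return 1.0 / (now - last_request)

def caching_value(last_request: float, now: float, segment_idx: int) -> float:
    """Reference frequency divided by segment distance, so initial
    segments of recently requested objects score highest."""
    return ref_frequency(last_request, now) / (segment_idx + 1)

# Eviction picks the cached segment with the smallest caching value.
# Keys are (object, segment index); values are last-request times.
cached = {("A", 0): 100.0, ("A", 6): 100.0, ("B", 0): 20.0}
now = 120.0
victim = min(cached, key=lambda k: caching_value(cached[k], now, k[1]))
```

In this toy example the later segment of the popular object "A" is evicted before the initial segment of the colder object "B", illustrating the bias toward initial segments.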
Performance evaluation setup
An event-driven simulator is used.
Cache space is divided into two stacks: 10% for initial segments, 90% for later segments.
Video size is uniformly distributed from 0.5B to 1.5B blocks, with B = 2000.
Playback time for a block is 1.8 sec, so the playback time of a video is 30-90 min.
Cache size = 400,000 blocks, so the cache holds about 200 movies on average.
There are 2000 movies.
Request inter-arrival time is exponentially distributed with mean lambda = 60.0.
Performance evaluation setup (cont.)
Video popularity follows a Zipf distribution with skew factor 0.2, and is changed every 200 requests.
No topology information is included: number of servers, number of clients, number of proxies, etc.
The scheme is compared with a prefix/suffix scheme and a full-video caching scheme. For the full-video scheme, LRU is used. For the prefix/suffix scheme, the cache space assigned to prefixes equals the space assigned to the initial Kmin segments.
Metrics measured: byte hit ratio and the percentage of requests with delayed starts.
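The workload described above can be sketched like this (Python; the exact Zipf parameterisation used by the simulator is not given in the summary, so the `1 / rank**skew` form below is an assumption):

```python
import random

def zipf_weights(n_videos: int, skew: float) -> list:
    """Normalised Zipf-like popularity: weight of the rank-r video
    proportional to 1 / r**skew (assumed parameterisation)."""
    raw = [1.0 / (r ** skew) for r in range(1, n_videos + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def next_request(weights, mean_interarrival: float, rng: random.Random):
    """One simulated request: (inter-arrival gap, chosen video index)."""
    gap = rng.expovariate(1.0 / mean_interarrival)  # exponential inter-arrivals
    video = rng.choices(range(len(weights)), weights=weights, k=1)[0]
    return gap, video

rng = random.Random(0)
weights = zipf_weights(2000, 0.2)              # 2000 movies, skew factor 0.2
gap, video = next_request(weights, 60.0, rng)  # mean inter-arrival 60.0
```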
Simulation results
Impact of cache size: cache size is varied from 150 to 450 videos' worth of capacity.
Impact of skew: the byte-hit-ratio gain is not as big when requests are highly skewed.
Other factors studied: video length, total number of videos, percentage of cache space for initial segments, and user viewing behavior.
Conclusion
Introduces a new caching technique in which video objects are divided into unequal segments: proposes a segmentation mechanism together with cache admission and replacement policies.
Carries out a comparative performance study of the new technique vs. proxy prefix caching and conventional whole-object caching.
Results show that the new technique improves byte hit ratio and reduces delayed starts.
Problems
Topology, network-load savings, and the calculation of Kmin are not discussed.
Cache duplication: proxy servers in the same network or subnet may cache the same content.
Segmentation scheme: schemes other than doubling the segment size each time could be studied.
Discussion
Recommended