
Compression/Decompression Standardization Proposal

Aug 24th, 2010


Agenda

I. TIA Relevance

II. Current IETF Standards

III. Limitations of the Current Standards

IV. Methods to Overcome the Current Limitations

V. TIA Proposal


TIA Relevance

Why is Compression/Decompression relevant to TIA?

a) TIA should lead the effort to compress (green initiative) and encrypt (safety) any data prior to its transmission on any network. Compression improves bandwidth utilization by 30-60% and reduces power consumption by 20-30%

b) The existing standards are better suited to SW realization, which is inefficient and expensive

c) This standardization could eventually be rolled out to other relevant industry sectors, such as wireless


Current Standards Definition

1. RFC1951 – DEFLATE compressed data format specification
a. Defines a lossless compressed data format that compresses data using a combination of the LZ77 algorithm and Huffman coding

2. RFC1950 – zlib header/trailer format specification
a. Builds upon the DEFLATE standard and adds header and trailer information to the compressed data
b. Defines the Adler-32 checksum, used for detection of data corruption

3. RFC1952 – gzip header/trailer format specification
a. Builds upon the DEFLATE standard and adds header and trailer information to the compressed data
b. Includes a cyclic redundancy check value for detecting data corruption, and also a length field
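The three RFCs wrap the same DEFLATE payload in different framing, which stock zlib exposes through its `wbits` parameter. A minimal Python sketch (illustration only, not part of the proposal):

```python
import zlib

data = b"The same DEFLATE payload can be wrapped three ways." * 20

def deflate(data: bytes, wbits: int) -> bytes:
    # wbits selects the framing around an otherwise identical DEFLATE stream:
    #   15  -> zlib wrapper (RFC1950): 2-byte header + Adler-32 trailer
    #   31  -> gzip wrapper (RFC1952): 10-byte header + CRC-32 and length trailer
    #  -15  -> raw DEFLATE (RFC1951): no header or trailer at all
    c = zlib.compressobj(wbits=wbits)
    return c.compress(data) + c.flush()

zlib_blob = deflate(data, 15)
gzip_blob = deflate(data, 31)
raw_blob = deflate(data, -15)

# Decompression must be told which framing to expect as well.
assert zlib.decompress(zlib_blob, wbits=15) == data
assert zlib.decompress(gzip_blob, wbits=31) == data
assert zlib.decompress(raw_blob, wbits=-15) == data
assert gzip_blob[:2] == b"\x1f\x8b"  # gzip magic bytes
```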


Issues with Current Standards

1. Inability to concurrently process multiple streams
a) The current standards do not support concurrent processing of, and switching between, multiple streams

2. Inability to process variable-sized streams
a) Most (hardware) implementations do not support processing streams of varying sizes. A fixed-block compression is implemented, which processes each stream as an individual file.
b) Consequently, compression is reduced, as history is not shared across individual files.

3. Header differences between the GZIP and ZLIB formats
a) Current implementations are not header/trailer-format agnostic.
b) Consequently, they can support either the gzip or the zlib format, but not both.
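The cost of not sharing history (point 2b) can be measured with stock zlib: compressing each packet as an independent file versus keeping one compressor, and therefore its 32 KB history window, alive across packets. A small Python sketch of the comparison:

```python
import zlib

packets = [b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"] * 50

# Per-file mode: every packet is compressed as an independent file, so the
# LZ77 history window is empty at the start of each one.
independent = sum(len(zlib.compress(p)) for p in packets)

# Shared-history mode: one compressor persists across packets; Z_SYNC_FLUSH
# emits a byte-aligned boundary after each packet while keeping the history
# window intact for the next one.
c = zlib.compressobj()
shared = sum(len(c.compress(p) + c.flush(zlib.Z_SYNC_FLUSH)) for p in packets)
shared += len(c.flush())

# Repeated content across packets compresses far better with shared history.
assert shared < independent
```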


Issues with Current Standards (Cont…)

4. Inefficient usage of computational resources to select the best possible compressed block – SHT/DHT/stored mode
a) The CPU performs all three modes of compression and then selects the best mode, thereby consuming relatively many CPU cycles.

5. Higher latency due to generation of the Huffman table
a) The DEFLATE standard achieves compression using a standard string search-and-replace technique, and also performs encoding using Huffman tables to realize higher compression
b) However, the generation of Huffman tables increases latency

6. Lack of guaranteed fixed throughput
a) The throughput of the compression block is dependent on the characteristics of the data
b) This is not desirable for networking applications
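The SHT-versus-DHT trade-off in point 4 can be reproduced with stock zlib, whose Z_FIXED strategy forces the static Huffman table so that no per-block table is built or transmitted. A Python sketch (Z_FIXED may not be exposed by name on older builds, so the numeric value from zlib.h is used as a fallback):

```python
import zlib

# Z_FIXED has value 4 in zlib.h; fall back to it if the constant is missing.
Z_FIXED = getattr(zlib, "Z_FIXED", 4)

def deflate(data: bytes, strategy: int) -> bytes:
    c = zlib.compressobj(level=9, strategy=strategy)
    return c.compress(data) + c.flush()

data = b"some moderately repetitive payload " * 64

# SHT mode: the fixed code tables from RFC1951 are used as-is, so no
# per-block table is computed or transmitted (lower latency).
sht = deflate(data, Z_FIXED)

# Default mode: the encoder may build a dynamic Huffman table (DHT) per
# block, or fall back to SHT/stored, whichever encoding is smallest --
# the mode-selection work point 4 describes.
dht = deflate(data, zlib.Z_DEFAULT_STRATEGY)

# Both are valid DEFLATE streams; only the table-selection policy differs.
assert zlib.decompress(sht) == data
assert zlib.decompress(dht) == data
```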


CebaTech’s Proposal

1. Concurrent processing of multiple streams
a) In networking, streams may be divided into packets, and packets of different streams may be sent out of order to the compressing block.
b) In such cases, the compression unit must be able to process multiple streams concurrently by switching contexts between streams.
c) To achieve this, the compression unit has to save the context of a particular stream and restore that context when another packet of the same stream arrives.

2. Processing of variable-sized streams
a) Valid history data should be shared across contiguous blocks/packets of a stream, to achieve the maximum compression ratio possible on that stream
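The save/restore behavior in 1b-c can be sketched in software by keeping one persistent compression context per stream and looking it up as interleaved packets arrive. A hypothetical Python illustration (the `StreamMux` class and its names are invented for this sketch):

```python
import zlib

class StreamMux:
    """One persistent compression context per stream id."""

    def __init__(self):
        self._ctx = {}  # stream id -> persistent zlib compressor

    def compress_packet(self, stream_id, payload: bytes) -> bytes:
        # "Context restore" is simply looking up the stream's compressor;
        # its history window survives between packets of the same stream.
        c = self._ctx.setdefault(stream_id, zlib.compressobj())
        return c.compress(payload) + c.flush(zlib.Z_SYNC_FLUSH)

    def finish(self, stream_id) -> bytes:
        return self._ctx.pop(stream_id).flush()

mux = StreamMux()
a1 = mux.compress_packet("A", b"alpha " * 100)
b1 = mux.compress_packet("B", b"beta " * 100)   # context switch A -> B
a2 = mux.compress_packet("A", b"alpha " * 100)  # A's context restored
tail_a = mux.finish("A")
tail_b = mux.finish("B")

# Each stream's concatenated packets form one valid, ordered zlib stream.
d = zlib.decompressobj()
assert d.decompress(a1 + a2 + tail_a) == b"alpha " * 200
```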


CebaTech’s Proposal (Cont…)

3. Support for both the GZIP and ZLIB formats
a) The implementation should be able to switch (under user control) between the two formats for different streams.

4. Support for all modes of compression (SHT/DHT/stored)
a) Instead of performing all three modes for every packet, the mode selection should be controlled by the user, through a command input to the compressing block.

5. A compression standard for low-latency compression
a) Selection of an appropriate header/trailer format
b) Selection of a non-DEFLATE compression algorithm to reduce latency
c) Should still be able to handle varying file sizes

6. Fixed-throughput compression for networking
a) Maintaining line rate is imperative for networking applications
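Points 5b and 6 call for an algorithm whose per-byte work is constant and that needs no Huffman table. As a deliberately trivial stand-in, a run-length encoder has exactly those properties (constant work per input byte, zero table-building latency) at the cost of compression ratio; a hypothetical Python sketch, not a proposed algorithm:

```python
def rle_compress(data: bytes) -> bytes:
    """Emit each run as a (count, byte) pair; count is capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decompress(blob: bytes) -> bytes:
    out = bytearray()
    for k in range(0, len(blob), 2):
        out += blob[k + 1:k + 2] * blob[k]
    return bytes(out)

# Lossless round trip; throughput is data-independent, unlike DEFLATE.
assert rle_decompress(rle_compress(b"aaaabbbcc")) == b"aaaabbbcc"
```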


Proposal to TIA

TIA creates a new committee – "TR-XX"

a) No conflict with IETF RFC 1950, 1951, and 1952: the new standards would be extensions that facilitate standardized HW implementation of the existing standards

b) Exclusive to compression/decompression implementations in the networking domain

c) The same extensions could additionally be adopted by the storage industry as well

d) As a next step, the new TR-XX standards could be modified to include compression and encryption as a single-pass multi-transform for HW implementations