1 Chapter 4: Compression (Part 2) Image Compression


Acknowledgement

Some figures and pictures are taken from:

The Scientist and Engineer's Guide to Digital Signal Processing

by Steven W. Smith


Lossy compression

Motivations: uncompressed image, video, and audio data are huge; in HDTV, for example, the bit rate easily exceeds 1 Gbps.

Lossless methods (Huffman, arithmetic, LZW) are inadequate for images and video because the spatial and/or temporal redundancy of pixel values is not exploited.

Special characteristics of human perception (e.g., greater sensitivity to low spatial frequencies) should be taken advantage of to achieve a higher compression ratio.

Spatial sensitivity

(Figure: contrast sensitivity of the eye; a higher spatial frequency requires a larger contrast to be perceived.)

Vector quantization (VQ)

A general lossy compression technique.

Scalar quantization: e.g., 3,200,134 ~ 3M.

VQ is a generalization of scalar quantization: the subjects to be quantized are vectors.

VQ can be viewed as a form of pattern recognition, where an input pattern (a vector) is approximated by one of a predetermined set of standard patterns.

"Doesn't quantization mean rounding the figure? So how can people get slim with it?" (Benny)

Vector quantization (Def’n)

A vector quantizer Q of dimension k and size N is a mapping from a vector in a k-dimensional Euclidean space into a finite set C containing N output or reproduction points, called code vectors.

C: the codebook (with N vectors).

(Diagram: Q maps a vector in the k-dimensional space to one of the N code vectors in C.)

Vector quantization (Def’n)

The rate of Q is r = (log2N)/k = number of bits per vector component used to represent the input vector.

Two issues:
how to match a vector to a code vector (pattern recognition);
how to set up the codebook.

Searching the codebook

Given a vector, we need to search the codebook (finding an index) for a code vector that gives the minimum distortion.

Squared error distortion:

d(x, x^) = sum over i = 0 to k-1 of (x_i - x^_i)^2

where x is the vector to be coded, x^ is the code vector, and x_i is the i-th component of vector x.
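As an illustrative sketch (Python is used for examples throughout; function names are my own, not part of any standard), the codebook search with this distortion measure is a nearest-neighbor scan:

```python
def squared_error(x, y):
    # d(x, y): sum of squared component differences.
    return sum((a - b) ** 2 for a, b in zip(x, y))

def nearest_code_vector(x, codebook):
    # Return (index, distortion) of the code vector minimizing d(x, c).
    best_i = min(range(len(codebook)),
                 key=lambda i: squared_error(x, codebook[i]))
    return best_i, squared_error(x, codebook[best_i])
```

With the 4-entry codebook of the later illustration, the vector [25,10,24] maps to index 3 (code 11) with distortion 16.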

Codebook training

Get a large sample of data (the training set).
Pick an initial set of code vectors.
Partition the training set into cells.
Use the cells' centroids to tune the codebook.
Repeat until convergence.

(Illustration: Q partitions the training set into cells; the centroid of each cell is found.)

Codebook training

Step 1
Given a training set X with M vectors.
Let d be the mean-square distortion measure.
Let the iteration index be j; set j = 1.
Select an initial codebook C0.
Set the initial distortion d0 = infinity.
Pick a convergence threshold E.

Codebook training

Step 2
Optimally encode all vectors x in X using Cj-1:
assign x to cell Pi,j-1 if x is quantized as yi,j-1, where yi,j-1 is the i-th code vector in Cj-1.
Compute dj = the sum of all vector distortions.
If (dj-1 - dj) / dj < E, then quit with codebook Cj-1; otherwise go to Step 3.

Codebook training

Step 3
Update the code vectors:
yi,j = the average of all the vectors assigned to cell Pi,j-1 (i.e., the centroid).
j++; go to Step 2.
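Steps 1-3 above can be sketched as one training loop (a generalized Lloyd / LBG iteration; the function name, the centroid averaging in floats, and the default threshold are my own choices, not mandated by the slides):

```python
def train_codebook(training_set, codebook, eps=1e-3, max_iter=100):
    # One possible sketch of the Steps 1-3 loop.
    def dist(x, y):  # squared-error distortion
        return sum((a - b) ** 2 for a, b in zip(x, y))

    prev_d = float("inf")
    for _ in range(max_iter):
        # Step 2: assign each training vector to its nearest code vector.
        cells = [[] for _ in codebook]
        total_d = 0.0
        for x in training_set:
            i = min(range(len(codebook)), key=lambda j: dist(x, codebook[j]))
            cells[i].append(x)
            total_d += dist(x, codebook[i])
        if total_d == 0 or (prev_d - total_d) / total_d < eps:
            break  # relative improvement fell below the threshold E
        prev_d = total_d
        # Step 3: replace each code vector by the centroid of its cell
        # (empty cells keep their old code vector).
        codebook = [
            [sum(c) / len(cell) for c in zip(*cell)] if cell else y
            for cell, y in zip(cells, codebook)
        ]
    return codebook
```

Running this on the ten training vectors of the illustration reproduces the behavior shown there: the cell containing only vector 6 converges to [15, 42, 52] itself.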


Codebook training (illustration)

codebook:
code  code vector
00    [25,33,40]
01    [13,53,61]
10    [20,88,30]
11    [21,10,24]

training vectors:
1 [25,10,24]   2 [30,30,30]   3 [11,91,11]   4 [28,28,29]   5 [20,81,11]
6 [15,42,52]   7 [24,11,24]   8 [28,29,28]   9 [25,12,25]   10 [10,89,12]

Codebook training (illustration)

Encoding training vector 1, [25,10,24]:
d([25,10,24], [25,33,40]) = 785
d([25,10,24], [13,53,61]) = 3362
d([25,10,24], [20,88,30]) = 6145
d([25,10,24], [21,10,24]) = 16
The minimum distortion is 16, so vector 1 is assigned to the cell of code 11.

Codebook training (illustration)

After encoding all training vectors, the cells are:
code  cell     code vector
00    2,4,8    [25,33,40]
01    6        [13,53,61]
10    3,5,10   [20,88,30]
11    1,7,9    [21,10,24]

Codebook training (illustration)

Updating the code vector of cell 00 (vectors 2, 4, 8) to the centroid:
([30,30,30] + [28,28,29] + [28,29,28]) / 3 = [28,29,29]  (component-wise, using integer division)

Codebook training (illustration)

The updated codebook, with each code vector moved to the centroid of its cell:
code  cell     code vector
00    2,4,8    [28,29,29]
01    6        [15,42,52]
10    3,5,10   [13,87,11]
11    1,7,9    [24,11,24]

Codebook training (illustration)

Final encoding of the training vectors:
vector           code vector   code
1  [25,10,24]    [24,11,24]    11
2  [30,30,30]    [28,29,29]    00
3  [11,91,11]    [13,87,11]    10
4  [28,28,29]    [28,29,29]    00
5  [20,81,11]    [13,87,11]    10
6  [15,42,52]    [15,42,52]    01
7  [24,11,24]    [24,11,24]    11
8  [28,29,28]    [28,29,29]    00
9  [25,12,25]    [24,11,24]    11
10 [10,89,12]    [13,87,11]    10

VQ and image compression

A simple way of applying VQ to image compression is to decompose an image into a number of (say) 2x2 blocks. Each block then yields a 4-element vector.

Instead of encoding the pixel values of a block, one trains a codebook and encodes each block as an index into the codebook.

To train a codebook, a number of images of a similar nature are used; e.g., facial images are used to train a codebook for compressing facial images.

Example block vectors: [154,154,154,147], [175,182,168,154], [189,168,168,168], [217,175,196,175], [175,154,175,168], [203,175,168,168]
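The block-to-index pipeline above can be sketched as follows (a minimal sketch: grayscale image as a list of rows, dimensions assumed to be multiples of the block size; names are my own):

```python
def encode_blocks(image, codebook, bs=2):
    # Split the image into bs x bs blocks, flatten each block into a
    # vector, and replace it by the index of the nearest code vector.
    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y))
    h, w = len(image), len(image[0])
    indices = []
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            vec = [image[r + i][c + j] for i in range(bs) for j in range(bs)]
            indices.append(min(range(len(codebook)),
                               key=lambda k: dist(vec, codebook[k])))
    return indices
```

Each 2x2 block thus costs only log2(N) bits instead of four pixel values.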

Image & video compression

JPEG: spatial redundancy removal in intra-frame coding.

H.261 and MPEG: both spatial and temporal redundancy removal in intra-frame and inter-frame coding.


Sub-sampling techniques

Sub-sample to compress; interpolation techniques are used upon reconstruction of the original data.

Sub-sampling results in information loss. However, the loss is acceptable by virtue of the physiological characteristics of human eyes.

Chromatic sub-sampling: the human eye is more sensitive to changes in brightness than to color changes. Very often, RGB values are transformed to Y'CBCR values; the chroma components are then sub-sampled to reduce the data requirement.

Chromatic sub-sampling

4:2:2: sub-sample the color signals horizontally by a factor of 2 (CCIR 601 standard).
4:1:1: sub-sample horizontally by a factor of 4.
4:2:0: sub-sample in both dimensions by a factor of 2.
4:2:0 is often used in JPEG and MPEG.

Chromatic sub-sampling (notation)

In the notation J:a:b (e.g., 4:2:2):
J is the luma horizontal sampling reference;
a is the chroma horizontal sampling (relative to J);
b is either the same as a, or 0, indicating that CB and CR are additionally sub-sampled vertically by a factor of 2.

Example: a frame with pixel dimensions of 720x480:

Chroma   Y            Y            Cb,Cr        Cb,Cr        H sub-sampling  V sub-sampling
format   pixels/line  lines/frame  pixels/line  lines/frame  factor          factor
4:4:4    720          480          720          480          none            none
4:2:2    720          480          360          480          2:1             none
4:2:0    720          480          360          240          2:1             2:1
4:1:1    720          480          180          480          4:1             none
4:1:0    720          480          180          120          4:1             4:1
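For the common formats in the table (4:4:4, 4:2:2, 4:2:0, 4:1:1), the chroma-plane dimensions follow directly from the J:a:b notation; a small sketch (the function name is my own, and 4:1:0 does not fit this simple rule):

```python
def chroma_dimensions(fmt, y_w, y_h):
    # Chroma plane size for a J:a:b format string, assuming J = 4.
    _j, a, b = (int(s) for s in fmt.split(":"))
    c_w = y_w * a // 4            # horizontal: a chroma samples per 4 luma
    c_h = y_h // 2 if b == 0 else y_h   # b = 0 means 2:1 vertical sub-sampling
    return c_w, c_h
```

For a 720x480 frame this reproduces the Cb/Cr columns of the table above.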

JPEG compression

JPEG stands for "Joint Photographic Experts Group". JPEG is commonly used to refer to a standard for compressing and encoding continuous-tone still images.

Adjustable compression/quality.

Four modes of operation:
Sequential (line-by-line) (baseline implementation)
Progressive (blur-to-clear)
Lossless (pixel-for-pixel)
Hierarchical (multiple resolutions)

JPEG (steps)

1. Preparation
Includes analog-to-digital conversion. The image can be separated into Y'CBCR components to facilitate sub-sampling of the chrominance components. The image is segmented into 8x8 blocks.

2. Processing
Sophisticated algorithms, such as transformation from the spatial to the frequency domain using the DCT.

(Diagram: uncompressed picture -> picture preparation -> picture processing -> quantization -> entropy encoding -> compressed picture.)

JPEG (steps)

3. Quantization
Maps the real-number values from the previous step to integers. This process results in a loss of precision, but achieves data compression.
It specifies the granularity of the mapping, allowing control of the precision carried in the compressed data.
Different levels of quantization are applied to the luminance and chrominance components, exploiting the sensitivity of human perception.

JPEG (steps)

4. Entropy encoding
Compresses the resulting data stream without loss. Steps: a zig-zag scan linearizes the data; predictive encoding and RLE are used to encode the DC and AC components; finally, a Huffman scheme encodes the data.

JPEG (schematic diagram)

(Diagram: the Y', CB, and CR components are each transformed, quantized, and entropy-encoded.)

Image preparation

Each image consists of a number of components (e.g., RGB, Y'CBCR).

Divide each component into 8x8 blocks. Each block is a "data unit" subject to the DCT transformation.

The values in a block are shifted from unsigned integers with range [0, 2^p - 1] to signed integers with range [-2^(p-1), 2^(p-1) - 1]; e.g., in 8-bit mode, the range [0,255] is shifted to [-128,127].

DCT (Discrete Cosine Transform)

An 8x8 image block is a 2D function f(x,y) (0 <= x, y <= 7) in the spatial domain, e.g.:

231 224 224 217 217 203 189 196
210 217 203 189 203 224 217 224
196 217 210 224 203 203 196 189
210 203 196 203 182 203 182 189
203 224 203 217 196 175 154 140
182 189 168 161 154 126 119 112
175 154 126 105 140 105 119 84
154 98 105 98 105 63 112 84

DCT (Discrete Cosine Transform)

We define 64 basis functions for the frequency variables u, v (0 <= u, v <= 7) in a 2-dimensional space:

f_{u,v}(x,y) = cos((2x+1)u*pi/16) * cos((2y+1)v*pi/16)

e.g., f_{1,0}(x,y) = cos((2x+1)*pi/16)

DCT (Discrete Cosine Transform)

These are wave functions of successively increasing frequencies. (Imagine them as undulating surfaces of increasingly frequent ups and downs.)

Given a 2D function (imagine it as a 2D surface), one can decompose it into a linear combination of these wave functions.

So, DCT is a frequency (uv coordinates) representation of a spatial (xy coordinates) function.

(Figures: a 1-D example; some 2-D basis functions, e.g., (u,v) = (0,0), (1,1), (0,1), (1,0), (2,2), (5,1), (6,3), and renderings of 2-D basis functions with quantized values.)

DCT

(Figure: the 64 (8x8) DCT basis functions, viewed from above.)

DCT coefficients (example)

An 8x8 block, in x,y coordinates:

139 144 149 153 155 155 155 155
144 151 153 156 149 146 156 156
150 155 160 163 158 156 156 156
159 161 162 160 160 159 159 159
159 160 161 162 162 155 155 155
161 161 161 161 160 157 157 157
162 162 161 163 162 157 157 157
162 162 161 161 163 158 158 158

DCT coefficients after transformation, in u,v coordinates:

233.1   0.3  -9.8  -7.9   2.1  -0.1  -3.7   1.1
-25.5 -15.9  -3.5  -6.4  -2.9   2.1  -0.7  -1.5
-12.3  -8.5  -0.3   0.1   0.2   0.0  -1.1  -0.2
 -6.4  -2.3  -0.4   2.2   0.9  -0.6   0.2   0.4
  1.9  -2.2  -0.8   4.3  -0.1  -2.5   1.6   1.5
  5.2  -2.0  -1.6   3.4  -0.8  -1.0   2.4  -0.6
  2.0  -2.1  -3.3   2.1  -0.5  -0.6   2.3  -0.4
 -0.6   0.5  -5.6   0.3   1.9  -0.2   0.2  -0.2

DCT

From the original spatial function f(x,y), extract the frequency components by multiplying f(x,y) with these basis functions.

F(u,v) = (1/4) * c_u * c_v * sum over x = 0 to 7, y = 0 to 7 of f(x,y) * cos((2x+1)u*pi/16) * cos((2y+1)v*pi/16)

where c_u, c_v = 1/sqrt(2) for u, v = 0; 1 otherwise.
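As a minimal sketch in plain Python (no libraries; function names are my own), this formula and its inverse (given on a later slide) can be implemented directly:

```python
import math

def dct2(block):
    # Forward 2-D DCT of an 8x8 block, following F(u,v) above.
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    return [[0.25 * c(u) * c(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / 16)
                * math.cos((2 * y + 1) * v * math.pi / 16)
                for x in range(8) for y in range(8))
             for v in range(8)]
            for u in range(8)]

def idct2(F):
    # Inverse transform: recovers f(x,y) exactly (before any quantization).
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    return [[0.25 * sum(
                c(u) * c(v) * F[u][v]
                * math.cos((2 * x + 1) * u * math.pi / 16)
                * math.cos((2 * y + 1) * v * math.pi / 16)
                for u in range(8) for v in range(8))
             for y in range(8)]
            for x in range(8)]
```

For a constant block all energy lands in F(0,0), and idct2(dct2(block)) reproduces the block up to floating-point error; the transform itself is lossless.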

DCT

The result is a function F(u,v) in the frequency domain: 64 (8x8) coefficients representing the 64 frequency components of the original image function.

Of the 64 coefficients, F(0,0) is due to the basis function with u, v = 0, a flat wave function. F(0,0) is also known as the DC coefficient. The other coefficients are called the AC coefficients.

DCT

The DC component determines the fundamental gray (color) intensity of the 8x8 pixels. The AC components add the intensity variation to the pixel values to give the original image function.

A typical image consists of large regions of uniform intensity and color. The DCT thus concentrates most of the signal in the lower spatial frequencies, and many of the high-frequency coefficients have very small values. Entropy encoding applied to the DCT output therefore normally achieves high data reduction.

IDCT

The inverse of DCT (IDCT) takes the 64 DCT coefficients and reconstructs a 64-point output image by summing the basis signals.

The result is a summation of all the frequency components, yielding a reconstruction of the original image. (Imagine adding up the respective undulating surfaces to yield the original surfaces.)

f(x,y) = (1/4) * sum over u = 0 to 7, v = 0 to 7 of c_u * c_v * F(u,v) * cos((2x+1)u*pi/16) * cos((2y+1)v*pi/16)

where c_u, c_v = 1/sqrt(2) for u, v = 0; 1 otherwise.

(Figure: successive reconstructions for the "eye" block.)

DCT

A 1-D example to illustrate the decomposition and reconstruction:

original:              8 16 24 32 40 48 56 64
DCT, then truncation:  100 -52 0 -5 0 -2 0 0.4
IDCT reconstruction:   8 15 24 32 40 48 57 63

Quantization

The 64 DCT coefficients are real numbers (i.e., not integers). These coefficients are quantized to throw away bits, and that is the main source of lossiness.

Quantization

Uniform quantization: the DCT coefficients are divided by a constant N and the result is rounded. Equal treatment for all DCT coefficients.

Quantization tables: each of the 64 coefficients can be adjusted separately. Specific frequencies can be given more importance than others, according to the characteristics of the original image.

Quantization

Quantization tables: in JPEG, each F(u,v) is divided by a different quantizer step size Q(u,v), given in a quantization table:

16  11  10  16  24  40  51  61
12  12  14  19  26  58  60  55
14  13  16  24  40  57  69  56
14  17  22  29  51  87  80  62
18  22  37  56  68 109 103  77
24  35  55  64  81 104 113  92
49  64  78  87 103 121 120 101
72  92  95  98 112 100 103  99
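The quantize/dequantize pair can be sketched directly from this table (Python; names are my own, and the dequantized values recover only multiples of the step size, which is where the loss occurs):

```python
# Default luminance quantization table from the slide.
Q_LUMA = [
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
]

def quantize(F, Q):
    # Fq(u,v) = round(F(u,v) / Q(u,v)): the lossy step.
    return [[round(F[u][v] / Q[u][v]) for v in range(8)] for u in range(8)]

def dequantize(Fq, Q):
    # Reconstruction only recovers multiples of the step size.
    return [[Fq[u][v] * Q[u][v] for v in range(8)] for u in range(8)]
```

For the example block shown earlier, F(0,0) = 233.1 quantizes to round(233.1/16) = 15 and dequantizes back to 240, not 233.1.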

Quantization

The eye is most sensitive to low frequencies (upper-left corner) and less sensitive to high frequencies (lower-right corner).

The JPEG standard defines 2 default quantization tables, one for luma (above) and one for chroma.

Quality factor: how would scaling the quantization numbers affect the image, say if I doubled them all? In most implementations, the quality factor is the scaling factor for the default quantization table.

Zig-Zag scan

This step linearizes the 8x8 block of DCT coefficients. It maps an 8x8 block to a 64-byte stream. RLE and entropy encoding methods are then applied to the byte stream.

Why zig-zag? It groups the coefficients from low to high frequencies, so that the zeros in the high frequencies are grouped together. Consecutive zeros are effectively compressed using RLE, and the high frequencies can be truncated easily.
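The zig-zag traversal walks the anti-diagonals u+v = s of the block, alternating direction; a sketch that generates the visiting order (the function name is my own):

```python
def zigzag_order(n=8):
    # Index pairs (u, v) of an n x n block in zig-zag order: entries on
    # the same anti-diagonal u + v = s are grouped, and the traversal
    # direction alternates from one diagonal to the next.
    order = []
    for s in range(2 * n - 1):
        diag = [(u, s - u) for u in range(n) if 0 <= s - u < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order
```

The order starts (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), ... and ends at (7,7), so low-frequency coefficients come first and the high-frequency zeros bunch up at the tail.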

(Figure: the zig-zag scan order over the 8x8 block.)

Entropy encoding

DC component: encoded using predictive encoding.
The DC coefficient determines the average color (or intensity) of the 8x8 block. Between adjacent blocks, the variation is fairly small, so we encode the difference between the current DC coefficient and that of the previous block.

AC components: encoded with RLE.
The 63-number stream has lots of zeros in it. Encode it as (skip, value) pairs, where skip is the number of zeros and value is the next non-zero component.

Entropy encoding

Convert the DCT coefficients after quantization into a compact binary sequence in 2 steps: forming an intermediate symbol sequence, then converting the sequence into binary using a Huffman table.

Intermediate symbol sequence: each AC coefficient is represented by a pair of symbols:
Symbol-1: (Runlength, Size)
Symbol-2: (Amplitude)

AC encoding

Runlength is the number of consecutive 0-valued AC coefficients preceding a nonzero AC coefficient. Runlength is in the range 0 to 15.

Size is the number of bits used to encode the magnitude of Amplitude; Amplitude can use up to 10 bits.

Amplitude is the amplitude of the nonzero AC coefficient, in the range [-1024, +1023] (10 bits).

AC encoding

e.g., given the sequence ..., 0, 0, 0, 0, 0, 0, 476, ... the run of six zeros followed by 476 is encoded as (6,9)(476): 2 symbols.

If Runlength > 15, then Symbol-1 (15,0) represents 16 zeros. e.g., what is the sequence represented by (15,0) (15,0) (7,4) (12)?

(0,0) is the End-of-Block symbol: all remaining coefficients are 0's.
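The symbol-forming step can be sketched as follows (a simplification in Python; the tuple layout and function name are my own, and a trailing run of zeros is collapsed into the End-of-Block symbol):

```python
def ac_symbols(ac):
    # Turn the quantized AC coefficients into (Runlength, Size, Amplitude)
    # intermediate symbols; Size is the bit count of |Amplitude|.
    symbols, run = [], 0
    for coeff in ac:
        if coeff == 0:
            run += 1
        else:
            while run > 15:            # (15, 0) stands for a run of 16 zeros
                symbols.append((15, 0, None))
                run -= 16
            symbols.append((run, abs(coeff).bit_length(), coeff))
            run = 0
    if run:                            # trailing zeros -> End-of-Block
        symbols.append((0, 0, None))
    return symbols
```

Applied to six zeros followed by 476 this yields the single symbol (6, 9)(476) from the example, since 476 needs 9 bits.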

DC encoding

Categorize each DC value by Size (the number of bits needed to represent it, Symbol-1) and Amplitude (Symbol-2).

If DC is 4, 3 bits are needed. Encode the Size as a Huffman symbol, followed by the actual 3 bits.

Since DC values are differentially encoded, their range is [-2048, 2047].

Amplitude           Size  Codes
-1, 1               1     1=1, -1=0
-3, -2, 2, 3        2     3=11, 2=10, -3=00, -2=01
-7,...,-4, 4,...,7  3     etc.

JPEG example

Nelson suggested the following program to generate a quantization table:

for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        Q[i][j] = 1 + (1 + i + j) * quality;

The JPEG standard proposes Huffman encoding tables. One example (partial):

(0,0) EOB 1010    (3,1) 111010
(0,1) 00          (4,1) 111011
(0,2) 01          (5,2) 11111110111
(0,3) 100         (6,1) 1111011
(1,2) 11011       (7,1) 11111010
(2,1) 11100

Compression measures

Compression ratio (CR): CR = original data size / compressed data size. A higher CR means lower picture quality.

Wallace suggested the measure Nb = the number of bits per pixel in the compressed image.

An observation:
0.25-0.5 bits/pixel: moderate to good quality;
0.5-0.75 bits/pixel: good to very good quality;
0.75-1.5 bits/pixel: excellent quality;
1.5-2.0 bits/pixel: usually indistinguishable from the original.

Compression and picture quality

(Figure: the original image; DC only, 0.19 bpp; DC + 1-2 ACs, 0.43 bpp; DC + 1-9 ACs, 0.96 bpp.)

Lossless mode of JPEG compression

A special case of JPEG in which there is no loss in the encoding process.

In this mode, image processing and quantization use a predictive technique instead of transformation encoding. Neighboring pixels are taken as predictors, and the difference between the predicted and actual values is encoded using Huffman methods.

Lossless JPEG

(Diagram: source data -> predictor -> entropy encoder -> compressed data, with table specifications.)

Lossless JPEG

Normally, pixel values do not vary much except at intensity (color) edges. The differences therefore have small values in most regions of the image, so effective entropy compression is possible.

Lossless JPEG

For each pixel, the predictor uses a linear combination of previously encoded neighbors. The typical predictor functions used are:
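As a sketch of the standard predictor set (with a = the left neighbor, b = the pixel above, c = the pixel above-left; mode numbers follow the JPEG standard, and mode 0 means no prediction; the function name and integer halving are my own choices):

```python
def predict(a, b, c, mode):
    # The seven lossless-JPEG predictor functions, indexed by mode 0..7.
    return [0,                  # P0: no prediction
            a,                  # P1
            b,                  # P2
            c,                  # P3
            a + b - c,          # P4
            a + (b - c) // 2,   # P5
            b + (a - c) // 2,   # P6
            (a + b) // 2][mode] # P7
```

For example, with a = 10, b = 20, c = 5, predictor P4 gives 10 + 20 - 5 = 25.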


Lossless JPEG

The 2D predictors (P4-P7) usually do better than the 1D predictors. (P0 means "no prediction".)

The typical compression ratio achieved is about 2:1.

Sequential encoding

In sequential encoding, the whole image is encoded and decoded in a single run. It allows decoding with immediate presentation, but in top-to-bottom sequence.


Progressive encoding

Progressive mode encodes and reconstructs the image with a very rough representation, and refines it during successive steps. Also known as layered coding.


Successive refinement

2 ways to achieve successive refinement:

Spectral selection: send the DC component for entropy encoding, then the first few ACs, then some more ACs, etc.

Successive approximation: send all DCT coefficients in each run, but individual bits within a coefficient are processed in different runs. The most-significant bits are encoded first, followed by the less-significant bits.

Successive refinement

(Figure: the original image; 7 MSBs of DC, 0.15 bpp; +5 MSBs of AC, 0.3 bpp; +7 MSBs of AC, 0.8 bpp.)

Hierarchical mode

Down-sample by factors of 2 in both directions: e.g., reduce 640x480 to 320x240 to 160x120, etc.

Repeat the following process recursively until the full-resolution image is compressed. Initially, encode the smallest image. Then, at each level:
decode and up-sample the smaller image;
encode the difference between the up-sampled and original images.
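The layer construction can be sketched as follows (a simplification: averaging for down-sampling, pixel replication for up-sampling, and the per-layer JPEG codec omitted, so the difference images are computed against the up-sampled originals rather than decoded reconstructions; all names are my own):

```python
def down2(img):
    # Down-sample by 2 in both directions by averaging 2x2 neighborhoods.
    return [[(img[2*r][2*c] + img[2*r][2*c+1]
              + img[2*r+1][2*c] + img[2*r+1][2*c+1]) // 4
             for c in range(len(img[0]) // 2)]
            for r in range(len(img) // 2)]

def up2(img):
    # Up-sample by 2 using pixel replication (a minimal interpolation).
    return [[img[r // 2][c // 2] for c in range(2 * len(img[0]))]
            for r in range(2 * len(img))]

def hierarchical_layers(img, levels):
    # Smallest image first, then one difference image per level.
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(down2(pyramid[-1]))
    rev = pyramid[::-1]
    layers = [rev[0]]                  # base layer: the smallest image
    for small, big in zip(rev, rev[1:]):
        up = up2(small)
        layers.append([[big[r][c] - up[r][c] for c in range(len(big[0]))]
                       for r in range(len(big))])
    return layers
```

For a flat image every difference layer is all zeros, which is exactly why the scheme compresses well: each refinement layer carries only what the up-sampled prediction missed.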

Hierarchical mode

(Diagram: the original 640x480 image is down-sampled to 320x240 and then to 160x120. The smallest image is JPEG-encoded; it is then uncompressed, up-sampled, and the difference from the next level is JPEG-encoded; the process repeats up to the full resolution.)

Hierarchical mode

Since the original image is encoded at different resolutions, it requires more storage for the multiple resolutions.

Advantage: the picture is immediately available at different resolutions, and scaling is cheap when the display system works only at a lower resolution.

Wavelet coding

Used in JPEG 2000.

Consider a one-dimensional array of values:
101, 102, 103, 104, 105, 106, 107, 108

We can represent these values by pair-wise averages and differences:
pair-wise averages: (101+102)/2, (103+104)/2, (105+106)/2, (107+108)/2
pair-wise differences: (101-102)/2, (103-104)/2, (105-106)/2, (107-108)/2

Put these averages and differences into a sequence:
101.5, 103.5, 105.5, 107.5, -0.5, -0.5, -0.5, -0.5

Wavelet transform

Note that the original values can be reconstructed from the averages and differences: each pair is recovered as (average + difference, average - difference).

101.5, 103.5, 105.5, 107.5, -0.5, -0.5, -0.5, -0.5
-> 101, 102, 103, 104, 105, 106, 107, 108
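One level of this averaging/differencing transform and its inverse can be sketched as (Python; names are my own):

```python
def wavelet_step(seq):
    # Pair-wise averages followed by pair-wise half-differences.
    avgs = [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
    diffs = [(seq[i] - seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
    return avgs + diffs

def wavelet_inverse_step(coeffs):
    # Reconstruct each pair as (average + diff, average - diff).
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out += [a + d, a - d]
    return out
```

Applying wavelet_step again to the first half of its own output yields the multi-level decomposition shown on the following slides.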

Wavelet transform

Note that if we replace the four -0.5's by 0's, the recovered sequence is not too far off from the original:

101.5, 103.5, 105.5, 107.5, 0, 0, 0, 0
-> 101.5, 101.5, 103.5, 103.5, 105.5, 105.5, 107.5, 107.5

Hence, quantization and RLE can be applied to effectively reduce the size of the sequence.

Wavelet transform

Recursively apply the idea to the averages:

101, 102, 103, 104, 105, 106, 107, 108              (the original values: full detail)
101.5, 103.5, 105.5, 107.5, -0.5, -0.5, -0.5, -0.5  (averages of the four pairs: 3rd-level details)
102.5, 106.5, -1, -1, -0.5, -0.5, -0.5, -0.5        (averages of the first and second halves: 2nd-level details)
104.5, -2, -1, -1, -0.5, -0.5, -0.5, -0.5           (the overall average plus details at every level)

(Figure: applying the wavelet transform to each row of pixels, then to each column, separates the image into averages (more important data) and differences (less important data).)

JPEG vs. JPEG 2000

(Figure: the original image, JPEG 2000 at 0.27 bpp, and JPEG at 0.27 bpp. Author: Christopher M. Brislawn.)

JPEG vs. JPEG 2000

(Figure: the original image, JPEG 2000 at 1 bpp, and JPEG at 1 bpp.)

JPEG vs. JPEG 2000

(Figure: the original image, JPEG 2000 at 0.5 bpp, and JPEG at 0.5 bpp.)