
Image Enhancement - Noise models, denoising and sharpening

Rajiv Soundararajan

Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore

February 4, 2016


Noise models

Types of noise - Gaussian, salt and pepper, quantization, photon counting, speckle

Types of models - additive or multiplicative

y = x + z (additive) or y = xz (multiplicative)

The two models are interchangeable depending on whether we work in the exponential or logarithmic domain; x may be dependent on or independent of z
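As a quick NumPy sketch (the image size and noise parameters here are illustrative, not from the slides), the multiplicative model turns into an additive one in the logarithmic domain:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(50.0, 200.0, size=(64, 64))   # stand-in "clean image"

# Additive model: y = x + z
z_add = rng.normal(0.0, 10.0, size=x.shape)
y_add = x + z_add

# Multiplicative model: y = x * z (e.g. speckle), z concentrated around 1
z_mult = rng.normal(1.0, 0.1, size=x.shape)
y_mult = x * z_mult

# In the logarithmic domain the multiplicative model becomes additive:
# log y = log x + log z
assert np.allclose(np.log(y_mult), np.log(x) + np.log(z_mult))
```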


Gaussian noise

Univariate Gaussian pdf

pZ(z) = (1/√(2πσ²)) exp[−(z − µ)²/(2σ²)]

The sum of a large number of independent random variables tends to a Gaussian distribution by the central limit theorem. Thermal noise is the sum of the thermal vibrations of a large number of electrons.

Figure: noisy images with (a) σ = 10 and (b) σ = 30


Figure: σ = 30
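A minimal sketch of adding Gaussian noise at the two levels shown in the figures (the flat gray test image is a stand-in for an actual photograph):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.full((256, 256), 128.0)             # flat gray stand-in image

noisy = {}
for sigma in (10.0, 30.0):                 # the two levels from the figures
    y = x + rng.normal(0.0, sigma, size=x.shape)
    noisy[sigma] = np.clip(y, 0.0, 255.0)  # keep values in the 8-bit range

# the sample standard deviation tracks the nominal sigma
print({s: round(v.std(), 1) for s, v in noisy.items()})
```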


Salt and pepper noise

Transmission of images over noisy channels - binary symmetric channel with crossover probability ε

Pixel value x = ∑_{i=0}^{B−1} b_i 2^i; the MSE due to the MSB alone is ε·4^{B−1}, compared to ε(4^{B−1} − 1)/3 for all the other bits combined

Salt and pepper noise model - p(y = x) = 1 − α, p(y = MAX) = α/2, p(y = MIN) = α/2
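The salt and pepper model can be simulated directly; a median filter (a standard remedy for impulse noise, not part of this slide) removes nearly all isolated impulses. Apart from α = 0.05, all parameters are illustrative:

```python
import numpy as np

def add_salt_pepper(x, alpha, rng, lo=0.0, hi=255.0):
    """p(y = x) = 1 - alpha, p(y = MAX) = alpha/2, p(y = MIN) = alpha/2."""
    y = x.copy()
    u = rng.uniform(size=x.shape)
    y[u < alpha / 2] = hi                   # salt
    y[(u >= alpha / 2) & (u < alpha)] = lo  # pepper
    return y

def median3x3(y):
    """3x3 median filter; edges handled by reflection padding."""
    p = np.pad(y, 1, mode="reflect")
    w = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    return np.median(w, axis=(2, 3))

rng = np.random.default_rng(2)
x = np.full((128, 128), 100.0)              # flat stand-in image
y = add_salt_pepper(x, alpha=0.05, rng=rng)
xhat = median3x3(y)
print(np.mean((y - x) ** 2), np.mean((xhat - x) ** 2))
```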

Figure: α = 0.05




Photon counting noise

Images are acquired by collecting photons, and the number of photons collected is modeled as a Poisson random variable. The higher the intensity (pixel value), the higher the mean and the higher the variance (more noise in brighter regions).
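A small simulation of this signal-dependent behaviour (the photon counts of 10 and 1000 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
# photon counts in a dark region (mean 10) and a bright region (mean 1000)
dark = rng.poisson(10, size=100_000)
bright = rng.poisson(1000, size=100_000)

# for a Poisson random variable the variance equals the mean,
# so brighter regions carry more absolute noise
print(dark.mean(), dark.var())
print(bright.mean(), bright.var())
```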


Image denoising for Gaussian noise

Y = X + Z, where X is the original image, Z ∼ N(0, σZ²) is the noise and Y is the observed noisy image

(a) Original Image (b) Noisy Image


Simple transform - Mean subtraction

Pixel value at location (i, j) given by y(i, j)

y(i, j) = µ(i, j) + [y(i, j) − µ(i, j)]

where

µ(i, j) = ∑_{k=i−M}^{i+M} ∑_{l=j−M}^{j+M} w(k, l) y(k, l)

such that

∑_{k=i−M}^{i+M} ∑_{l=j−M}^{j+M} w(k, l) = 1

We will refer to µ as the low pass image and y1 = y − µ as the high pass image.
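The decomposition can be sketched with a box filter, whose weights w(k, l) = 1/(2M+1)² sum to one as required (image contents are illustrative):

```python
import numpy as np

def box_lowpass(y, M=1):
    """(2M+1)x(2M+1) averaging: w(k, l) = 1/(2M+1)^2, which sums to 1."""
    p = np.pad(y, M, mode="reflect")
    w = np.lib.stride_tricks.sliding_window_view(p, (2 * M + 1, 2 * M + 1))
    return w.mean(axis=(2, 3))

rng = np.random.default_rng(4)
y = rng.normal(128.0, 30.0, size=(64, 64))   # stand-in noisy image
mu = box_lowpass(y, M=1)                     # low pass image
y1 = y - mu                                  # high pass image

assert np.allclose(y, mu + y1)               # the decomposition is exact
print(mu.var(), y1.var())                    # most of the noise sits in y1
```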


(c) High pass image (d) Low pass image

The high pass image has more noise than the low pass image. We can denoise by discarding the high pass image - low pass filter the noisy image.


Low pass filters

Rectangular windows (averaging)

Design low pass filters in the frequency domain - truncated sinc in the spatial domain

Gaussian filters
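A sketch of a separable Gaussian low pass filter (the kernel radius of 3σ, the choice σ = 1.5 and the zero-mean test field are all illustrative):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel normalized to sum to 1."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_lowpass(y, sigma=1.5):
    """Separable filtering: convolve rows, then columns."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, y)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

rng = np.random.default_rng(5)
y = rng.normal(0.0, 30.0, size=(64, 64))  # zero-mean noise field
f = gaussian_lowpass(y)
print(y.var(), f.var())                   # filtering shrinks the noise variance
```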


(e) Noisy Image (f) Filtered Image

Drawback - loss of details


Gaussian source model

Instead of discarding the high pass image, process it further and then combine with the low pass image

x̂(i, j) = µ(i, j) + f(y(i, j)− µ(i, j)) = µ(i, j) + f(y1(i, j))

Suppose we use a Gaussian model for the high pass coefficients of the original image. Let Y1 denote the random output corresponding to y1(i, j), described by

Y1 = X1 + Z

where X1 ∼ N(0, σX²) and Z ∼ N(0, σZ²). Here X1 refers to the high pass coefficient of the original image that we would like to estimate, and Z refers to the additive Gaussian noise in the high pass image.


MMSE estimation

The minimum mean squared error (MMSE) estimate of X1 given Y1 is

X̂1 = E[X1 | Y1] = (σX² / (σX² + σZ²)) Y1

The denoised image is given by

x̂(i, j) = µ(i, j) + f(y1(i, j)) = µ(i, j) + (σX² / (σX² + σZ²)) y1(i, j)

Note that the estimate x̂(i, j) is a linear function of the image y. To obtain the denoised image, we need estimates of σX² and σZ² from the given noisy image.
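The linear MMSE shrinkage can be checked on synthetic coefficients (σX = 20 and σZ = 10 are made-up values):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
sigma_x, sigma_z = 20.0, 10.0
x1 = rng.normal(0.0, sigma_x, size=n)        # high pass coefficients, Gaussian model
y1 = x1 + rng.normal(0.0, sigma_z, size=n)   # noisy observations

gain = sigma_x ** 2 / (sigma_x ** 2 + sigma_z ** 2)
x1_hat = gain * y1                           # linear MMSE shrinkage

mse_noisy = np.mean((y1 - x1) ** 2)          # ≈ sigma_z^2 = 100
mse_mmse = np.mean((x1_hat - x1) ** 2)       # ≈ sigma_x^2 sigma_z^2 / (sigma_x^2 + sigma_z^2) = 80
print(mse_noisy, mse_mmse)
```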


(g) Low pass filtered image, MSE = 216 (h) MMSE filtered image, MSE = 51

MMSE filtered image preserves more details


Generalized Gaussian model

High pass coefficients can be modeled better as a generalized Gaussian distribution

fX(x) = (α / (2βΓ(1/α))) exp[−(|x|/β)^α]

Figure: empirical log histogram of the lighthouse image high pass coefficients (x-axis: high pass coefficient, y-axis: log frequency)

Note that the empirical log histogram of the lighthouse image high pass coefficients does not show an inverted parabola (as a Gaussian model would predict), but rather falls off linearly in the magnitude of the high pass coefficients.


Shrinkage estimators

Let us assume that the source X1 has a Laplacian distribution with parameter σX. The Laplacian distribution is a special case of the generalized Gaussian distribution with α = 1 and β = σX. The output image is given by

Y1 = X1 + Z

where Z ∼ N(0, σZ²). Maximizing the a posteriori probability yields a shrinkage estimator for x1(i, j). Mathematically,

x̂1(i, j) = argmax_x p(x | y1(i, j)) = sgn(y1(i, j)) (|y1(i, j)| − t)+

where (a)+ = max{a, 0}, sgn(a) is the sign of a, and t = √2 σZ²/σX.

Therefore the denoised image is given by

x̂(i, j) = µ(i, j) + f(y1(i, j)) = µ(i, j) + sgn(y1(i, j)) (|y1(i, j)| − t)+
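The shrinkage (soft thresholding) estimator can be sketched on synthetic Laplacian coefficients; σX = 5 and σZ = 10 are illustrative values chosen so that the source is sparse relative to the noise:

```python
import numpy as np

def soft_threshold(y1, t):
    """sgn(y1) * (|y1| - t)_+"""
    return np.sign(y1) * np.maximum(np.abs(y1) - t, 0.0)

rng = np.random.default_rng(7)
n = 100_000
sigma_x, sigma_z = 5.0, 10.0                         # sparse source, strong noise
x1 = rng.laplace(0.0, sigma_x / np.sqrt(2), size=n)  # Laplacian with std sigma_x
y1 = x1 + rng.normal(0.0, sigma_z, size=n)

t = np.sqrt(2) * sigma_z ** 2 / sigma_x              # MAP threshold from the slide
x1_hat = soft_threshold(y1, t)
print(np.mean((y1 - x1) ** 2), np.mean((x1_hat - x1) ** 2))
```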


(i) MMSE filter, MSE = 51 (j) Shrinkage estimator, MSE = 46

Shrinkage estimator achieves lower mean squared error


SureShrink

Consider the following model, where we assume that the high pass coefficient x1 is a constant

Y1 = x1 + Z1

Let us try to estimate x1 using a shrinkage estimator by optimizing the threshold t to minimize the mean squared error E[(x̂1(Y1) − x1)²].

Let n be the number of high pass coefficients. Stein's unbiased risk estimate is given by

SURE(t; y1) = nσZ² + ||g(y1)||² + 2σZ² ∇ · g(y1)

where y1 is the vector of all high pass coefficients and

g(y1) = sgn(y1)(|y1| − t)+ − y1


SureShrink

Since SURE(t; y1) is an unbiased estimate of the actual error E[||x̂1 − x1||²] over all high pass coefficients, it can be used as a surrogate for the actual error. Note that SURE(t; y1) does not depend on x1, which is what we are trying to estimate. Thus the optimized threshold is given by

t* = argmin_t SURE(t; y1)

D. L. Donoho and I. M. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," Journal of the American Statistical Association, vol. 90, no. 432, 1995.
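The threshold search can be sketched with the SURE expression above and a simple grid over t (the signal and noise parameters are illustrative):

```python
import numpy as np

def sure(t, y1, sigma_z):
    """SURE(t; y1) = n*sigma_z^2 + ||g(y1)||^2 + 2*sigma_z^2 * div(g)(y1),
    with g(y) = sgn(y)(|y| - t)_+ - y, so that
    ||g||^2 = sum(min(|y_i|, t)^2) and div(g) = -#{i : |y_i| <= t}."""
    g_sq = np.sum(np.minimum(np.abs(y1), t) ** 2)
    div_g = -np.count_nonzero(np.abs(y1) <= t)
    return y1.size * sigma_z ** 2 + g_sq + 2 * sigma_z ** 2 * div_g

rng = np.random.default_rng(8)
n = 50_000
sigma_z = 10.0
x1 = rng.laplace(0.0, 5.0, size=n)            # sparse high pass coefficients
y1 = x1 + rng.normal(0.0, sigma_z, size=n)

ts = np.linspace(0.0, 50.0, 101)              # simple grid search over t
risks = [sure(t, y1, sigma_z) for t in ts]
t_star = ts[int(np.argmin(risks))]

x1_hat = np.sign(y1) * np.maximum(np.abs(y1) - t_star, 0.0)
print(t_star, np.mean((x1_hat - x1) ** 2))
```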


Multiscale denoising

Figure: multiscale denoising block diagram (input image → output image)


Image Sharpening

Figure: image sharpening block diagram - the input image is passed through a high pass filter (HPF), scaled by a gain a, and added back to the input image to form the output image
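Assuming the diagram depicts unsharp masking (output = input + a × high pass image), a minimal sketch with a box-filter-based HPF:

```python
import numpy as np

def box_lowpass(y, M=1):
    p = np.pad(y, M, mode="reflect")
    w = np.lib.stride_tricks.sliding_window_view(p, (2 * M + 1, 2 * M + 1))
    return w.mean(axis=(2, 3))

def sharpen(y, a=1.0):
    """Unsharp masking: output = input + a * (high pass image)."""
    highpass = y - box_lowpass(y)      # simple HPF: identity minus low pass
    return y + a * highpass

# synthetic step edge: dark left half, bright right half
x = np.zeros((32, 32))
x[:, 16:] = 100.0
s = sharpen(x, a=1.0)
# sharpening overshoots on both sides of the edge, boosting local contrast
print(x.min(), x.max(), s.min(), s.max())
```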

