
Page 1:

Christine Lew
Dheyani Malde
Everardo Uribe
Yifan Zhang
Supervisors: Ernie Esser, Yifei Lou

BARCODE RECOGNITION TEAM

Page 2:

UPC BARCODE

• What type of barcode?
• What is a barcode? What is its structure?
• Our barcode representation: a vector of 0s and 1s.

Page 3:

MATHEMATICAL REPRESENTATION

Barcode distortion, mathematical representation: the observed signal is the clean barcode convolved with a blur kernel, plus noise.

What is convolution? Every value in the blurred signal is given by the same combination of nearby values in the original signal, and the kernel determines these combinations.

Kernel: for our case, the blur kernel k, or point spread function, is assumed to be a Gaussian.

Noise: the noise we deal with is white Gaussian noise.
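As a concrete illustration of this forward model (our own NumPy sketch, not code from the slides), the snippet below blurs a 0/1 barcode vector with a Gaussian kernel and adds white Gaussian noise. The default kernel width and noise level echo values that appear in the slides' plot titles (0.7 standard deviation, 0.05 sigma noise), but their exact scale in the original experiments is not specified, so treat them as placeholder settings.

```python
import numpy as np

def gaussian_kernel(std, radius=None):
    """Discrete Gaussian blur kernel (point spread function), normalized to sum to 1."""
    if radius is None:
        radius = max(1, int(3 * std) + 1)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * std**2))
    return k / k.sum()

def blur_and_add_noise(u, std=0.7, noise_sigma=0.05, seed=0):
    """Forward model b = k*u + n: Gaussian blur plus white Gaussian noise."""
    k = gaussian_kernel(std)
    rng = np.random.default_rng(seed)
    noise = noise_sigma * rng.standard_normal(u.size)
    b = np.convolve(u, k, mode="same") + noise
    return b, k

# Toy clean barcode: a vector of 0s and 1s.
u = np.array([0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0], dtype=float)
b, k = blur_and_add_noise(u)
```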

Page 4:

0.2 STANDARD DEVIATION

Page 5:

0.5 STANDARD DEVIATION

Page 6:

0.9 STANDARD DEVIATION

Page 7:

DECONVOLUTION

What is deconvolution? It is essentially solving for the clean barcode signal u.

Difference between non-blind deconvolution and blind deconvolution:
• Non-blind deconvolution: we know how the signal was blurred, i.e., we assume the kernel k is known.
• Blind deconvolution: we may know some or no information about how the signal was blurred. Very difficult.

Page 8:

SIMPLE METHODS OF DECONVOLUTION

Thresholding: convert the signal to a binary signal by checking whether the amplitude at each point is closer to 0 or to 1 and rounding to the nearer value.

Wiener filter: a classical method of reconstructing a distorted signal, using known information about the kernel and the noise.
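A minimal sketch of the thresholding idea just described (assuming the signal values lie roughly in [0, 1]; the function name is ours):

```python
import numpy as np

def threshold(signal, cutoff=0.5):
    """Round each sample to 0 or 1, whichever it is closer to."""
    return (np.asarray(signal) >= cutoff).astype(float)

# e.g. threshold(b) gives a crude first reconstruction of the blurred, noisy barcode b
```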

Page 9:

WIENER FILTER

We have $b = k * u + n$. The Wiener filter solves for an estimate of the clean signal.

The filter is easily described in the frequency domain. The Wiener filter defines a filter $g$ such that $x = g * b$, where $x$ is the estimated original signal. In the frequency domain,

$G = \dfrac{\bar{K}}{|K|^2 + r}$,

where $K$ is the Fourier transform of the kernel, $\bar{K}$ its complex conjugate, and $r$ the noise-to-signal power ratio.

Note that if there is no noise, $r = 0$, and $G$ reduces to the inverse filter $1/K$.
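A sketch of that frequency-domain formula in NumPy (our own implementation under the stated assumptions: k known, circular convolution, and a constant noise-to-signal ratio r; the function name is ours):

```python
import numpy as np

def wiener_deconvolve(b, k, r):
    """Wiener deconvolution: X = conj(K) / (|K|^2 + r) * B in the frequency domain.

    b : observed (blurred + noisy) signal
    k : blur kernel, assumed known (non-blind setting)
    r : noise-to-signal power ratio; r = 0 reduces to the inverse filter 1/K
    Boundary handling and kernel centering are ignored here for simplicity.
    """
    n = len(b)
    K = np.fft.fft(k, n)                      # kernel spectrum, zero-padded to signal length
    B = np.fft.fft(b)
    G = np.conj(K) / (np.abs(K) ** 2 + r)     # Wiener filter in frequency space
    return np.real(np.fft.ifft(G * B))        # estimated clean signal x
```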

Page 10:

0.7 STANDARD DEVIATION, 0.05 SIGMA NOISE

Page 11:

0.7 STANDARD DEVIATION, 0.2 SIGMA NOISE

Page 12:

0.7 STANDARD DEVIATION, 0.5 SIGMA NOISE

Page 13:

Non-blind Deblurring using Yu Mao's Method

By: Christine Lew, Dheyani Malde

Page 14:

Overview
• 2 general approaches:
  o Yifei's method (blind: we don't know the blur kernel)
  o Yu Mao's method (non-blind: we know the blur kernel)
• General goal:
  o Take a blurry barcode with noise and make it as clear as possible through gradient projection.
  o Find the method with the best results and the least error.

Page 15:

Data Model
• The method's goal is to solve a convex model with:
  o k: blur kernel
  o u: clear barcode
  o b: blurry barcode with noise
  o b = k*u + noise
• Find the minimum through gradient projection (a sketch follows below).
• Exactly like gradient descent, only we project onto [0,1] every iteration.
• Once we find the minimizing u, we can predict the clear signal.
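A minimal sketch of the gradient projection iteration for this data model, assuming the objective is the least-squares fit F(u) = ½‖k∗u − b‖² (that exact form is our assumption; the step size and iteration count are illustrative):

```python
import numpy as np

def gradient_projection(b, k, n_iters=500, dt=1.0):
    """Projected gradient descent on F(u) = 0.5 * ||k*u - b||^2 with u kept in [0, 1].

    Each iteration is a plain gradient step followed by projection (clipping) onto [0, 1]:
        u_{n+1} = proj_[0,1]( u_n - dt * grad F(u_n) )
    Boundary effects of the 'same'-mode convolution are ignored in this sketch.
    """
    b = np.asarray(b, dtype=float)
    u = np.clip(b.copy(), 0.0, 1.0)                           # initial guess from the data
    for _ in range(n_iters):
        residual = np.convolve(u, k, mode="same") - b         # k*u - b
        grad = np.convolve(residual, k[::-1], mode="same")    # adjoint blur (k is symmetric anyway)
        u = np.clip(u - dt * grad, 0.0, 1.0)                  # gradient step + projection onto [0,1]
    return u
```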

Page 16:

Classical Method

• Compare with the Wiener filter in terms of error rate.
  o Error rate: the difference between the reconstructed signal and the ground truth (one simple way to compute it is sketched below).
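The slides do not give the exact error-rate formula; one simple, illustrative definition is the fraction of positions where the thresholded reconstruction disagrees with the true barcode:

```python
import numpy as np

def error_rate(reconstructed, ground_truth, cutoff=0.5):
    """Fraction of samples where the thresholded reconstruction differs from the true 0/1 barcode."""
    rec_bits = np.asarray(reconstructed) >= cutoff
    true_bits = np.asarray(ground_truth) >= cutoff
    return float(np.mean(rec_bits != true_bits))
```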

Page 17:

Comparisons for Yu Mao’s Method

Yu Mao’s Gradient Projection Wiener Filter

Page 18:

Comparisons for Yu Mao’s Method (Cont.)

Wiener Filter  Yu Mao's Gradient Projection

Page 19:

Jumps
• How does the number of jumps affect the result?
• What happens when we apply signals with different numbers of jumps to the different de-blurring methods?
• Compared Yu Mao's method & the Wiener filter.
• Created code to calculate the number of jumps.
• 3 levels of jumps:
  o Easy: 4 jumps
  o Medium: 22 jumps
  o Hard: 45 jumps (regular barcode)

Page 20:

What Are Jumps?
• Jump: when the binary signal goes from 0 to 1 or from 1 to 0.
• We created code to calculate the number of jumps (a sketch follows below).
• 3 levels of jumps:
  o Easy: 4 jumps
  o Medium: 22 jumps
  o Hard: 45 jumps (regular barcode)
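A minimal version of that jump-counting idea (our sketch, not the team's actual code):

```python
import numpy as np

def count_jumps(barcode):
    """Count the transitions 0->1 or 1->0 in a binary barcode vector."""
    bits = np.asarray(barcode).astype(int)
    return int(np.sum(bits[1:] != bits[:-1]))

# count_jumps([0, 1, 1, 0, 1]) == 3
```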

Page 21:

Analyzing Jumps
• How does the number of jumps affect the result (the recovered clear barcode)?
• Compare Yu Mao's method & the Wiener filter.

Page 22:

Comparison for Small Jumps (4 jumps)

Yu Mao’s Gradient Projection Wiener Filter

Page 23:

Comparison for Hard Jumps (45 jumps)

Wiener Filter  Yu Mao's Gradient Projection

Page 24:

Wiener Filter with Varying Jumps

- More jumps, greater error.
- Gets drastically worse with more jumps.

Page 25:

Yu Mao's Gradient Projection with Varying Jumps

- More jumps, greater error.
- Gets only slightly worse with more jumps.

Page 26:

Conclusion

Yu Mao's method is better overall:
• produces less error
• across the jump cases: a consistent error rate of 20%-30%

The Wiener filter did not have a consistent error rate:
• consistent only for small/medium jumps
• at 45 jumps, a 40%-50% error rate

Page 27:

BLIND DECONVOLUTION

Yifan Zhang
Everardo Uribe

Page 28:

DERIVATION OF MODEL

We have the blur model from before: the observed signal equals the kernel convolved with the clean barcode, plus noise. For our approach, we assume that the kernel k is a symmetric point-spread function. Since it is symmetric, flipping it produces an equivalent kernel.

We flip the entire equation and begin reconfiguring it, where Y and N are matrix representations (of the observed signal and the noise, respectively).

Page 29:

DERIVATION OF MODEL

Signal segmentation & final equation:
• The middle bars are always the same, represented as the vector [0 1 0 1 0] in our case.
• We have to solve for x in the resulting linear system.

Page 30:

Gradient Projection

• A projected form of gradient descent (a first-order optimization method).
• Advantage:
  o allows us to restrict the solution to a range
• Disadvantages:
  o takes a very long time
  o not extremely accurate results
  o underestimates the signal

$u_{n+1} = \pi_{[0,1]}(u_n - dt\,\nabla F(u_n))$

Page 31:
Page 32:

Least Squares

• estimates unknown parameters
• minimizes the sum of squared errors
• considers observational errors (errors in the observations only); a sketch follows below
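A minimal example of solving a least-squares problem with NumPy; the matrix A and vector b below are generic placeholders, not the slides' actual system:

```python
import numpy as np

# Solve min_x ||A x - b||^2 for a small, generic overdetermined system.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.9])

x, residuals, rank, singular_values = np.linalg.lstsq(A, b, rcond=None)
print(x)   # least-squares estimate of the unknown parameters
```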

Page 33:

Least Squares (cont.)
• Advantages:
  o returns results faster than the other methods
  o easy to implement
  o reasonably accurate results
  o great results for both low and high noise
• Disadvantage:
  o doesn't work well when there are errors in the data matrix itself (not just in the observations)

Page 34:
Page 35:

Total Least Squares

• A least-squares type of data modeling.
• Also considers errors in the data matrix, not only in the observations.
• Solved via the SVD (singular value decomposition), a matrix factorization, of the augmented matrix C; a sketch follows below.
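A sketch of the classical SVD-based TLS solution (we assume C on the slide denotes the augmented matrix [A | b]; this is the textbook construction, not necessarily the team's exact code). Unlike ordinary least squares, it perturbs both A and b to find the closest consistent system.

```python
import numpy as np

def total_least_squares(A, b):
    """Classical TLS: take the right singular vector of C = [A | b] for the
    smallest singular value and read the solution off its partition.
    Assumes the last component of that singular vector is nonzero."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    C = np.hstack([A, b])                 # augmented data matrix
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                            # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]                # TLS estimate of x
```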

Page 36:

Total Least Squares (Cont.)
• Advantage:
  o works on data for which the other methods do not
  o better than least squares when there are more errors in the data matrix
• Disadvantages:
  o doesn't work well for most data outside those extremes
  o overfits the data
  o not accurate
  o takes a long time

Page 37: