
Digital Image Processing Third Edition

Rafael C. Gonzalez University of Tennessee

Richard E. Woods MedData Interactive

Pearson International Edition prepared by Pearson Education

PEARSON

Prentice Hall


To Samantha and To Janice, David, and Jonathan


Contents

Preface 15
Acknowledgments 19
The Book Web Site 20
About the Authors 21

1 Introduction 23
1.1 What Is Digital Image Processing? 23
1.2 The Origins of Digital Image Processing 25
1.3 Examples of Fields that Use Digital Image Processing 29
1.3.1 Gamma-Ray Imaging 30
1.3.2 X-Ray Imaging 31
1.3.3 Imaging in the Ultraviolet Band 33
1.3.4 Imaging in the Visible and Infrared Bands 34
1.3.5 Imaging in the Microwave Band 40
1.3.6 Imaging in the Radio Band 42
1.3.7 Examples in which Other Imaging Modalities Are Used 42
1.4 Fundamental Steps in Digital Image Processing 47
1.5 Components of an Image Processing System 50
Summary 53
References and Further Reading 53

2 Digital Image Fundamentals 57
2.1 Elements of Visual Perception 58
2.1.1 Structure of the Human Eye 58
2.1.2 Image Formation in the Eye 60
2.1.3 Brightness Adaptation and Discrimination 61
2.2 Light and the Electromagnetic Spectrum 65
2.3 Image Sensing and Acquisition 68
2.3.1 Image Acquisition Using a Single Sensor 70
2.3.2 Image Acquisition Using Sensor Strips 70
2.3.3 Image Acquisition Using Sensor Arrays 72
2.3.4 A Simple Image Formation Model 72
2.4 Image Sampling and Quantization 74
2.4.1 Basic Concepts in Sampling and Quantization 74
2.4.2 Representing Digital Images 77
2.4.3 Spatial and Intensity Resolution 81
2.4.4 Image Interpolation 87
2.5 Some Basic Relationships between Pixels 90
2.5.1 Neighbors of a Pixel 90
2.5.2 Adjacency, Connectivity, Regions, and Boundaries 90
2.5.3 Distance Measures 93
2.6 An Introduction to the Mathematical Tools Used in Digital Image Processing 94
2.6.1 Array versus Matrix Operations 94
2.6.2 Linear versus Nonlinear Operations 95
2.6.3 Arithmetic Operations 96
2.6.4 Set and Logical Operations 102
2.6.5 Spatial Operations 107
2.6.6 Vector and Matrix Operations 114
2.6.7 Image Transforms 115
2.6.8 Probabilistic Methods 118
Summary 120
References and Further Reading 120
Problems 121

3 Intensity Transformations and Spatial Filtering 126
3.1 Background 127
3.1.1 The Basics of Intensity Transformations and Spatial Filtering 127
3.1.2 About the Examples in This Chapter 129
3.2 Some Basic Intensity Transformation Functions 129
3.2.1 Image Negatives 130
3.2.2 Log Transformations 131
3.2.3 Power-Law (Gamma) Transformations 132
3.2.4 Piecewise-Linear Transformation Functions 137
3.3 Histogram Processing 142
3.3.1 Histogram Equalization 144
3.3.2 Histogram Matching (Specification) 150
3.3.3 Local Histogram Processing 161
3.3.4 Using Histogram Statistics for Image Enhancement 161
3.4 Fundamentals of Spatial Filtering 166
3.4.1 The Mechanics of Spatial Filtering 167
3.4.2 Spatial Correlation and Convolution 168
3.4.3 Vector Representation of Linear Filtering 172
3.4.4 Generating Spatial Filter Masks 173
3.5 Smoothing Spatial Filters 174
3.5.1 Smoothing Linear Filters 174
3.5.2 Order-Statistic (Nonlinear) Filters 178
3.6 Sharpening Spatial Filters 179
3.6.1 Foundation 180
3.6.2 Using the Second Derivative for Image Sharpening-The Laplacian 182
3.6.3 Unsharp Masking and Highboost Filtering 184
3.6.4 Using First-Order Derivatives for (Nonlinear) Image Sharpening-The Gradient 187
3.7 Combining Spatial Enhancement Methods 191
3.8 Using Fuzzy Techniques for Intensity Transformations and Spatial Filtering 195
3.8.1 Introduction 195
3.8.2 Principles of Fuzzy Set Theory 196
3.8.3 Using Fuzzy Sets 200
3.8.4 Using Fuzzy Sets for Intensity Transformations 208
3.8.5 Using Fuzzy Sets for Spatial Filtering 211
Summary 214
References and Further Reading 214
Problems 215

4 Filtering in the Frequency Domain 221
4.1 Background 222
4.1.1 A Brief History of the Fourier Series and Transform 222
4.1.2 About the Examples in this Chapter 223
4.2 Preliminary Concepts 224
4.2.1 Complex Numbers 224
4.2.2 Fourier Series 225
4.2.3 Impulses and Their Sifting Property 225
4.2.4 The Fourier Transform of Functions of One Continuous Variable 227
4.2.5 Convolution 231
4.3 Sampling and the Fourier Transform of Sampled Functions 233
4.3.1 Sampling 233
4.3.2 The Fourier Transform of Sampled Functions 234
4.3.3 The Sampling Theorem 235
4.3.4 Aliasing 239
4.3.5 Function Reconstruction (Recovery) from Sampled Data 241
4.4 The Discrete Fourier Transform (DFT) of One Variable 242
4.4.1 Obtaining the DFT from the Continuous Transform of a Sampled Function 243
4.4.2 Relationship Between the Sampling and Frequency Intervals 245
4.5 Extension to Functions of Two Variables 247
4.5.1 The 2-D Impulse and Its Sifting Property 247
4.5.2 The 2-D Continuous Fourier Transform Pair 248
4.5.3 Two-Dimensional Sampling and the 2-D Sampling Theorem 249
4.5.4 Aliasing in Images 250
4.5.5 The 2-D Discrete Fourier Transform and Its Inverse 257
4.6 Some Properties of the 2-D Discrete Fourier Transform 258
4.6.1 Relationships Between Spatial and Frequency Intervals 258
4.6.2 Translation and Rotation 258
4.6.3 Periodicity 259
4.6.4 Symmetry Properties 261
4.6.5 Fourier Spectrum and Phase Angle 267
4.6.6 The 2-D Convolution Theorem 271
4.6.7 Summary of 2-D Discrete Fourier Transform Properties 275
4.7 The Basics of Filtering in the Frequency Domain 277
4.7.1 Additional Characteristics of the Frequency Domain 277
4.7.2 Frequency Domain Filtering Fundamentals 279
4.7.3 Summary of Steps for Filtering in the Frequency Domain 285
4.7.4 Correspondence Between Filtering in the Spatial and Frequency Domains 285
4.8 Image Smoothing Using Frequency Domain Filters 291
4.8.1 Ideal Lowpass Filters 291
4.8.2 Butterworth Lowpass Filters 295
4.8.3 Gaussian Lowpass Filters 298
4.8.4 Additional Examples of Lowpass Filtering 299
4.9 Image Sharpening Using Frequency Domain Filters 302
4.9.1 Ideal Highpass Filters 303
4.9.2 Butterworth Highpass Filters 306
4.9.3 Gaussian Highpass Filters 307
4.9.4 The Laplacian in the Frequency Domain 308
4.9.5 Unsharp Masking, Highboost Filtering, and High-Frequency-Emphasis Filtering 310
4.9.6 Homomorphic Filtering 311
4.10 Selective Filtering 316
4.10.1 Bandreject and Bandpass Filters 316
4.10.2 Notch Filters 316
4.11 Implementation 320
4.11.1 Separability of the 2-D DFT 320
4.11.2 Computing the IDFT Using a DFT Algorithm 321
4.11.3 The Fast Fourier Transform (FFT) 321
4.11.4 Some Comments on Filter Design 325
Summary 325
References and Further Reading 326
Problems 326

5 Image Restoration and Reconstruction 333
5.1 A Model of the Image Degradation/Restoration Process 334
5.2 Noise Models 335
5.2.1 Spatial and Frequency Properties of Noise 335
5.2.2 Some Important Noise Probability Density Functions 336
5.2.3 Periodic Noise 340
5.2.4 Estimation of Noise Parameters 341
5.3 Restoration in the Presence of Noise Only-Spatial Filtering 344
5.3.1 Mean Filters 344
5.3.2 Order-Statistic Filters 347
5.3.3 Adaptive Filters 352
5.4 Periodic Noise Reduction by Frequency Domain Filtering 357
5.4.1 Bandreject Filters 357
5.4.2 Bandpass Filters 358
5.4.3 Notch Filters 359
5.4.4 Optimum Notch Filtering 360
5.5 Linear, Position-Invariant Degradations 365
5.6 Estimating the Degradation Function 368
5.6.1 Estimation by Image Observation 368
5.6.2 Estimation by Experimentation 369
5.6.3 Estimation by Modeling 369
5.7 Inverse Filtering 373
5.8 Minimum Mean Square Error (Wiener) Filtering 374
5.9 Constrained Least Squares Filtering 379
5.10 Geometric Mean Filter 383
5.11 Image Reconstruction from Projections 384
5.11.1 Introduction 384
5.11.2 Principles of Computed Tomography (CT) 387
5.11.3 Projections and the Radon Transform 390
5.11.4 The Fourier-Slice Theorem 396
5.11.5 Reconstruction Using Parallel-Beam Filtered Backprojections 397
5.11.6 Reconstruction Using Fan-Beam Filtered Backprojections 403
Summary 409
References and Further Reading 410
Problems 411

6 Color Image Processing 416
6.1 Color Fundamentals 417
6.2 Color Models 423
6.2.1 The RGB Color Model 424
6.2.2 The CMY and CMYK Color Models 428
6.2.3 The HSI Color Model 429
6.3 Pseudocolor Image Processing 436
6.3.1 Intensity Slicing 437
6.3.2 Intensity to Color Transformations 440
6.4 Basics of Full-Color Image Processing 446
6.5 Color Transformations 448
6.5.1 Formulation 448
6.5.2 Color Complements 452
6.5.3 Color Slicing 453
6.5.4 Tone and Color Corrections 455
6.5.5 Histogram Processing 460
6.6 Smoothing and Sharpening 461
6.6.1 Color Image Smoothing 461
6.6.2 Color Image Sharpening 464
6.7 Image Segmentation Based on Color 465
6.7.1 Segmentation in HSI Color Space 465
6.7.2 Segmentation in RGB Vector Space 467
6.7.3 Color Edge Detection 469
6.8 Noise in Color Images 473
6.9 Color Image Compression 476
Summary 477
References and Further Reading 478
Problems 478

7 Wavelets and Multiresolution Processing 483
7.1 Background 484
7.1.1 Image Pyramids 485
7.1.2 Subband Coding 488
7.1.3 The Haar Transform 496
7.2 Multiresolution Expansions 499
7.2.1 Series Expansions 499
7.2.2 Scaling Functions 501
7.2.3 Wavelet Functions 505
7.3 Wavelet Transforms in One Dimension 508
7.3.1 The Wavelet Series Expansions 508
7.3.2 The Discrete Wavelet Transform 510
7.3.3 The Continuous Wavelet Transform 513
7.4 The Fast Wavelet Transform 515
7.5 Wavelet Transforms in Two Dimensions 523
7.6 Wavelet Packets 532
Summary 542
References and Further Reading 542
Problems 543

8 Image Compression 547
8.1 Fundamentals 548
8.1.1 Coding Redundancy 550
8.1.2 Spatial and Temporal Redundancy 551
8.1.3 Irrelevant Information 552
8.1.4 Measuring Image Information 553
8.1.5 Fidelity Criteria 556
8.1.6 Image Compression Models 558
8.1.7 Image Formats, Containers, and Compression Standards 560
8.2 Some Basic Compression Methods 564
8.2.1 Huffman Coding 564
8.2.2 Golomb Coding 566
8.2.3 Arithmetic Coding 570
8.2.4 LZW Coding 573
8.2.5 Run-Length Coding 575
8.2.6 Symbol-Based Coding 581
8.2.7 Bit-Plane Coding 584
8.2.8 Block Transform Coding 588
8.2.9 Predictive Coding 606
8.2.10 Wavelet Coding 626
8.3 Digital Image Watermarking 636
Summary 643
References and Further Reading 644
Problems 645

9 Morphological Image Processing 649
9.1 Preliminaries 650
9.2 Erosion and Dilation 652
9.2.1 Erosion 653
9.2.2 Dilation 655
9.2.3 Duality 657
9.3 Opening and Closing 657
9.4 The Hit-or-Miss Transformation 662
9.5 Some Basic Morphological Algorithms 664
9.5.1 Boundary Extraction 664
9.5.2 Hole Filling 665
9.5.3 Extraction of Connected Components 667
9.5.4 Convex Hull 669
9.5.5 Thinning 671
9.5.6 Thickening 672
9.5.7 Skeletons 673
9.5.8 Pruning 676
9.5.9 Morphological Reconstruction 678
9.5.10 Summary of Morphological Operations on Binary Images 684
9.6 Gray-Scale Morphology 687
9.6.1 Erosion and Dilation 688
9.6.2 Opening and Closing 690
9.6.3 Some Basic Gray-Scale Morphological Algorithms 692
9.6.4 Gray-Scale Morphological Reconstruction 698
Summary 701
References and Further Reading 701
Problems 702

10 Image Segmentation 711
10.1 Fundamentals 712
10.2 Point, Line, and Edge Detection 714
10.2.1 Background 714
10.2.2 Detection of Isolated Points 718
10.2.3 Line Detection 719
10.2.4 Edge Models 722
10.2.5 Basic Edge Detection 728
10.2.6 More Advanced Techniques for Edge Detection 736
10.2.7 Edge Linking and Boundary Detection 747
10.3 Thresholding 760
10.3.1 Foundation 760
10.3.2 Basic Global Thresholding 763
10.3.3 Optimum Global Thresholding Using Otsu's Method 764
10.3.4 Using Image Smoothing to Improve Global Thresholding 769
10.3.5 Using Edges to Improve Global Thresholding 771
10.3.6 Multiple Thresholds 774
10.3.7 Variable Thresholding 778
10.3.8 Multivariable Thresholding 783
10.4 Region-Based Segmentation 785
10.4.1 Region Growing 785
10.4.2 Region Splitting and Merging 788
10.5 Segmentation Using Morphological Watersheds 791
10.5.1 Background 791
10.5.2 Dam Construction 794
10.5.3 Watershed Segmentation Algorithm 796
10.5.4 The Use of Markers 798
10.6 The Use of Motion in Segmentation 800
10.6.1 Spatial Techniques 800
10.6.2 Frequency Domain Techniques 804
Summary 807
References and Further Reading 807
Problems 809

11 Representation and Description 817
11.1 Representation 818
11.1.1 Boundary (Border) Following 818
11.1.2 Chain Codes 820
11.1.3 Polygonal Approximations Using Minimum-Perimeter Polygons 823
11.1.4 Other Polygonal Approximation Approaches 829
11.1.5 Signatures 830
11.1.6 Boundary Segments 832
11.1.7 Skeletons 834
11.2 Boundary Descriptors 837
11.2.1 Some Simple Descriptors 837
11.2.2 Shape Numbers 838
11.2.3 Fourier Descriptors 840
11.2.4 Statistical Moments 843
11.3 Regional Descriptors 844
11.3.1 Some Simple Descriptors 844
11.3.2 Topological Descriptors 845
11.3.3 Texture 849
11.3.4 Moment Invariants 861
11.4 Use of Principal Components for Description 864
11.5 Relational Descriptors 874
Summary 878
References and Further Reading 878
Problems 879

12 Object Recognition 883
12.1 Patterns and Pattern Classes 883
12.2 Recognition Based on Decision-Theoretic Methods 888
12.2.1 Matching 888
12.2.2 Optimum Statistical Classifiers 894
12.2.3 Neural Networks 904
12.3 Structural Methods 925
12.3.1 Matching Shape Numbers 925
12.3.2 String Matching 926
Summary 928
References and Further Reading 928
Problems 929

Appendix A 932
Bibliography 937
Index 965


Preface

When something can be read without effort,

great effort has gone into its writing.

Enrique Jardiel Poncela

This edition of Digital Image Processing is a major revision of the book. As in the 1977 and 1987 editions by Gonzalez and Wintz, and the 1992 and 2002 editions by Gonzalez and Woods, this fifth-generation edition was prepared with students and instructors in mind. The principal objectives of the book continue to be to provide an introduction to basic concepts and methodologies for digital image processing, and to develop a foundation that can be used as the basis for further study and research in this field. To achieve these objectives, we focused again on material that we believe is fundamental and whose scope of application is not limited to the solution of specialized problems. The mathematical complexity of the book remains at a level well within the grasp of college seniors and first-year graduate students who have introductory preparation in mathematical analysis, vectors, matrices, probability, statistics, linear systems, and computer programming. The book Web site provides tutorials to support readers needing a review of this background material.

One of the principal reasons this book has been the world leader in its field for more than 30 years is the level of attention we pay to the changing educational needs of our readers. The present edition is based on the most extensive survey we have ever conducted. The survey involved faculty, students, and independent readers of the book in 134 institutions from 32 countries. The major findings of the survey indicated a need for:

• A more comprehensive introduction early in the book to the mathematical tools used in image processing.
• An expanded explanation of histogram processing techniques.
• Stating complex algorithms in step-by-step summaries.
• An expanded explanation of spatial correlation and convolution.
• An introduction to fuzzy set theory and its application to image processing.
• A revision of the material dealing with the frequency domain, starting with basic principles and showing how the discrete Fourier transform follows from data sampling.
• Coverage of computed tomography (CT).
• Clarification of basic concepts in the wavelets chapter.
• A revision of the data compression chapter to include more video compression techniques, updated standards, and watermarking.
• Expansion of the chapter on morphology to include morphological reconstruction and a revision of gray-scale morphology.



Filtering in the Frequency Domain

Preview

Filter: A device or material for suppressing

or minimizing waves or oscillations of certain

frequencies.

Frequency: The number of times that a periodic

function repeats the same sequence of values during

a unit variation of the independent variable.

Webster's New Collegiate Dictionary

Although significant effort was devoted in the previous chapter to spatial filtering, a thorough understanding of this area is impossible without having at least a working knowledge of how the Fourier transform and the frequency domain can be used for image filtering. You can develop a solid understanding of this topic without having to become a signal processing expert. The key lies in focusing on the fundamentals and their relevance to digital image processing. The notation, usually a source of trouble for beginners, is clarified significantly in this chapter by emphasizing the connection between image characteristics and the mathematical tools used to represent them. This chapter is concerned primarily with establishing a foundation for the Fourier transform and how it is used in basic image filtering. Later, in Chapters 5, 8, 10, and 11, we discuss other applications of the Fourier transform. We begin the discussion with a brief outline of the origins of the Fourier transform and its impact on countless branches of mathematics, science, and engineering. Next, we start from basic principles of function sampling and proceed step-by-step to derive the one- and two-dimensional discrete Fourier transforms, the basic staples of frequency domain processing. During this development, we also touch upon several important aspects of sampling, such as aliasing, whose treatment requires an understanding of the frequency domain and thus are best covered in this chapter. This material is followed by a formulation of filtering in the frequency domain and the development of sections that parallel the spatial smoothing and sharpening filtering techniques discussed in Chapter 3. We conclude the chapter with a discussion of issues related to implementing the Fourier transform in the context of image processing. Because the material in Sections 4.2 through 4.4 is basic background, readers familiar with the concepts of 1-D signal processing, including the Fourier transform, sampling, aliasing, and the convolution theorem, can proceed to Section 4.5, where we begin a discussion of the 2-D Fourier transform and its application to digital image processing.

4.1 Background

4.1.1 A Brief History of the Fourier Series and Transform

The French mathematician Jean Baptiste Joseph Fourier was born in 1768 in the town of Auxerre, about midway between Paris and Dijon. The contribution for which he is most remembered was outlined in a memoir in 1807 and published in 1822 in his book, La Theorie Analitique de la Chaleur (The Analytic Theory of Heat). This book was translated into English 55 years later by Freeman (see Freeman [1878]). Basically, Fourier's contribution in this field states that any periodic function can be expressed as the sum of sines and/or cosines of different frequencies, each multiplied by a different coefficient (we now call this sum a Fourier series). It does not matter how complicated the function is; if it is periodic and satisfies some mild mathematical conditions, it can be represented by such a sum. This is now taken for granted but, at the time it first appeared, the concept that complicated functions could be represented as a sum of simple sines and cosines was not at all intuitive (Fig. 4.1), so it is not surprising that Fourier's ideas were met initially with skepticism.

Even functions that are not periodic (but whose area under the curve is finite) can be expressed as the integral of sines and/or cosines multiplied by a weighting function. The formulation in this case is the Fourier transform, and its utility is even greater than the Fourier series in many theoretical and applied disciplines. Both representations share the important characteristic that a function, expressed in either a Fourier series or transform, can be reconstructed (recovered) completely via an inverse process, with no loss of information. This is one of the most important characteristics of these representations because it allows us to work in the "Fourier domain" and then return to the original domain of the function without losing any information. Ultimately, it was the utility of the Fourier series and transform in solving practical problems that made them widely studied and used as fundamental tools.

The initial application of Fourier's ideas was in the field of heat diffusion, where they allowed the formulation of differential equations representing heat flow in such a way that solutions could be obtained for the first time. During the past century, and especially in the past 50 years, entire industries and academic disciplines have flourished as a result of Fourier's ideas. The advent of digital computers and the "discovery" of a fast Fourier transform (FFT) algorithm in the early 1960s (more about this later) revolutionized the field of signal processing. These two core technologies allowed for the first time practical processing of a host of signals of exceptional importance, ranging from medical monitors and scanners to modern electronic communications.

FIGURE 4.1 The function at the bottom is the sum of the four functions above it. Fourier's idea in 1807 that periodic functions could be represented as a weighted sum of sines and cosines was met with skepticism.

We will be dealing only with functions (images) of finite duration, so the Fourier transform is the tool in which we are interested. The material in the following section introduces the Fourier transform and the frequency domain. It is shown that Fourier techniques provide a meaningful and practical way to study and implement a host of image processing approaches. In some cases, these approaches are similar to the ones we developed in Chapter 3.

4.1.2 About the Examples in this Chapter

As in Chapter 3, most of the image filtering examples in this chapter deal with image enhancement. For example, smoothing and sharpening are traditionally associated with image enhancement, as are techniques for contrast manipulation. By its very nature, beginners in digital image processing find enhancement to be interesting and relatively simple to understand. Therefore, using examples from image enhancement in this chapter not only saves having an extra chapter in the book but, more importantly, is an effective tool for introducing newcomers to filtering techniques in the frequency domain. We use frequency domain processing methods for other applications in Chapters 5, 8, 10, and 11.

4.2 Preliminary Concepts

In order to simplify the progression of ideas presented in this chapter, we pause briefly to introduce several of the basic concepts that underlie the material that follows in later sections.

4.2.1 Complex Numbers

A complex number, C, is defined as

C = R + jI        (4.2-1)

where R and I are real numbers, and j is an imaginary number equal to the square root of -1; that is, j = √(-1). Here, R denotes the real part of the complex number and I its imaginary part. Real numbers are a subset of complex numbers in which I = 0. The conjugate of a complex number C, denoted C*, is defined as

C* = R - jI        (4.2-2)

Complex numbers can be viewed geometrically as points in a plane (called the complex plane) whose abscissa is the real axis (values of R) and whose ordinate is the imaginary axis (values of I). That is, the complex number R + jI is point (R, I) in the rectangular coordinate system of the complex plane.

Sometimes, it is useful to represent complex numbers in polar coordinates,

C = |C|(cos θ + j sin θ)        (4.2-3)

where |C| = √(R² + I²) is the length of the vector extending from the origin of the complex plane to point (R, I), and θ is the angle between the vector and the real axis. Drawing a simple diagram of the real and complex axes with the vector in the first quadrant will reveal that tan θ = I/R, or θ = arctan(I/R). The arctan function returns angles in the range [-π/2, π/2]. However, because I and R can be positive and negative independently, we need to be able to obtain angles in the full range [-π, π]. This is accomplished simply by keeping track of the sign of I and R when computing θ. Many programming languages do this automatically via so-called four-quadrant arctangent functions. For example, MATLAB provides the function atan2(Imag, Real) for this purpose.
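(As a quick numerical aside, not part of the original text: the short Python/NumPy sketch below checks the rectangular and polar representations of Eqs. (4.2-1) through (4.2-3) on one example value; numpy.arctan2 plays the same four-quadrant role as the MATLAB atan2 function mentioned above. The specific numbers are arbitrary.)

```python
import numpy as np

# A complex number C = R + jI, Eq. (4.2-1)
R, I = -1.0, 2.0
C = complex(R, I)

# Magnitude |C| = sqrt(R^2 + I^2) and four-quadrant angle theta, Eq. (4.2-3)
magnitude = np.hypot(R, I)           # same value as abs(C)
theta = np.arctan2(I, R)             # angle in the full range [-pi, pi]

# Rebuild C from its polar form to confirm the two representations agree
C_polar = magnitude * (np.cos(theta) + 1j * np.sin(theta))
print(np.isclose(C, C_polar))        # True
```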

Using Euler's formula,

e^{jθ} = cos θ + j sin θ        (4.2-4)

where e = 2.71828..., gives the following familiar representation of complex numbers in polar coordinates:

C = |C| e^{jθ}        (4.2-5)

Page 225: Digital image processing   3 e


where |C| and θ are as defined above. For example, the polar representation of the complex number 1 + j2 is √5 e^{jθ}, where θ = arctan(2/1) ≈ 63.4°, or about 1.1 radians. The preceding equations are applicable also to complex functions. For example, a complex function, F(u), of a variable u, can be expressed as the sum F(u) = R(u) + jI(u), where R(u) and I(u) are the real and imaginary component functions. As previously noted, the complex conjugate is F*(u) = R(u) - jI(u), the magnitude is |F(u)| = √(R(u)² + I(u)²), and the angle is θ(u) = arctan[I(u)/R(u)]. We return to complex functions several times in the course of this and the next chapter.

4.2.2 Fourier Series

As indicated in Section 4.1.1, a function f(t) of a continuous variable t that is periodic with period, T, can be expressed as the sum of sines and cosines multiplied by appropriate coefficients. This sum, known as a Fourier series, has the form

f(t) = Σ_{n=-∞}^{∞} c_n e^{j2πnt/T}        (4.2-6)

where

c_n = (1/T) ∫_{-T/2}^{T/2} f(t) e^{-j2πnt/T} dt,  for n = 0, ±1, ±2, ...        (4.2-7)

are the coefficients. The fact that Eq. (4.2-6) is an expansion of sines and cosines follows from Euler's formula, Eq. (4.2-4). We will return to the Fourier series later in this section.
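(A hedged illustration we add here, not from the book: Eqs. (4.2-6) and (4.2-7) can be approximated numerically. The sketch below estimates the coefficients c_n of a square wave of period T by quadrature and rebuilds the function from a truncated series; the test function and series length are our own choices.)

```python
import numpy as np

T = 2.0                                    # period of the test function
t = np.linspace(-T / 2, T / 2, 4000, endpoint=False)
f = np.where(np.abs(t) < T / 4, 1.0, 0.0)  # one period of a square wave

# c_n = (1/T) * integral over one period of f(t) exp(-j 2 pi n t / T) dt, Eq. (4.2-7)
def coeff(n):
    return np.trapz(f * np.exp(-2j * np.pi * n * t / T), t) / T

# Truncated version of Eq. (4.2-6): f(t) ~ sum_n c_n exp(j 2 pi n t / T)
N = 50
approx = sum(coeff(n) * np.exp(2j * np.pi * n * t / T) for n in range(-N, N + 1))
print(np.max(np.abs(approx.real - f)))     # small except near the jumps (Gibbs effect)
```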

4.2.3 Impulses and Their Sifting Property

Central to the study of linear systems and the Fourier transform is the concept of an impulse and its sifting property. A unit impulse of a continuous variable t located at t = 0, denoted δ(t), is defined as

δ(t) = ∞ if t = 0;  δ(t) = 0 if t ≠ 0        (4.2-8a)

and is constrained also to satisfy the identity

∫_{-∞}^{∞} δ(t) dt = 1        (4.2-8b)

Physically, if we interpret t as time, an impulse may be viewed as a spike of infinite amplitude and zero duration, having unit area. An impulse has the so-called sifting property with respect to integration,

∫_{-∞}^{∞} f(t) δ(t) dt = f(0)        (4.2-9)

provided that f(t) is continuous at t = 0, a condition typically satisfied in practice. Sifting simply yields the value of the function f(t) at the location of the impulse (i.e., the origin, t = 0, in the previous equation).

An impulse is not a function in the usual sense. A more accurate name is a distribution or generalized function. However, one often finds in the literature the names impulse function, delta function, and Dirac delta function, despite the misnomer.

To sift means literally to separate, or to separate out by putting through a sieve.

FIGURE 4.2 A unit discrete impulse located at x = x_0. Variable x is discrete, and δ is 0 everywhere except at x = x_0.

A more general statement of the sifting property involves an impulse located at an arbitrary point t_0, denoted by δ(t - t_0). In this case, the sifting property becomes

∫_{-∞}^{∞} f(t) δ(t - t_0) dt = f(t_0)        (4.2-10)

which yields the value of the function at the impulse location, t_0. For instance, if f(t) = cos(t), using the impulse δ(t - π) in Eq. (4.2-10) yields the result f(π) = cos(π) = -1. The power of the sifting concept will become quite evident shortly.

Let x represent a discrete variable. The unit discrete impulse, δ(x), serves the same purposes in the context of discrete systems as the impulse δ(t) does when working with continuous variables. It is defined as

δ(x) = 1 if x = 0;  δ(x) = 0 if x ≠ 0        (4.2-11a)

Clearly, this definition also satisfies the discrete equivalent of Eq. (4.2-8b):

Σ_{x=-∞}^{∞} δ(x) = 1        (4.2-11b)

The sifting property for discrete variables has the form

Σ_{x=-∞}^{∞} f(x) δ(x) = f(0)        (4.2-12)

or, more generally using a discrete impulse located at x = x_0,

Σ_{x=-∞}^{∞} f(x) δ(x - x_0) = f(x_0)        (4.2-13)

As before, we see that the sifting property simply yields the value of the function at the location of the impulse. Figure 4.2 shows the unit discrete impulse diagrammatically. Unlike its continuous counterpart, the discrete impulse is an ordinary function.
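(The discrete sifting property of Eqs. (4.2-12) and (4.2-13) is easy to check directly on a finite grid. The following sketch is an added illustration with an arbitrary test function; the finite range stands in for the infinite sum.)

```python
import numpy as np

x = np.arange(-10, 11)            # finite stand-in for the infinite sum
f = np.cos(0.3 * x)               # arbitrary test function

x0 = 4
delta = (x == x0).astype(float)   # unit discrete impulse located at x = x0

# Summing f(x) * delta(x - x0) over all x picks out f(x0), Eq. (4.2-13)
print(np.sum(f * delta), np.cos(0.3 * x0))   # the two values agree
```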

Of particular interest later in this section is an impulse train, s_{ΔT}(t), defined as the sum of infinitely many periodic impulses ΔT units apart:

s_{ΔT}(t) = Σ_{n=-∞}^{∞} δ(t - nΔT)        (4.2-14)


Figure 4.3 shows an impulse train. The impulses can be continuous or discrete.

4.2.4 The Fourier Transform of Functions of One Continuous Variable

The Fourier transform of a continuous function f(t) of a continuous variable, t, denoted ℑ{f(t)}, is defined by the equation†

ℑ{f(t)} = ∫_{-∞}^{∞} f(t) e^{-j2πμt} dt        (4.2-15)

where μ is also a continuous variable. Because t is integrated out, ℑ{f(t)} is a function only of μ. We denote this fact explicitly by writing the Fourier transform as ℑ{f(t)} = F(μ); that is, the Fourier transform of f(t) may be written for convenience as

F(μ) = ∫_{-∞}^{∞} f(t) e^{-j2πμt} dt        (4.2-16)

Conversely, given F(μ), we can obtain f(t) back using the inverse Fourier transform, f(t) = ℑ⁻¹{F(μ)}, written as

f(t) = ∫_{-∞}^{∞} F(μ) e^{j2πμt} dμ        (4.2-17)

where we made use of the fact that variable μ is integrated out in the inverse transform and wrote simply f(t), rather than the more cumbersome notation f(t) = ℑ⁻¹{F(μ)}. Equations (4.2-16) and (4.2-17) comprise the so-called Fourier transform pair. They indicate the important fact mentioned in Section 4.1 that a function can be recovered from its transform.

Using Euler's formula we can express Eq. (4.2-16) as

F(μ) = ∫_{-∞}^{∞} f(t) [cos(2πμt) - j sin(2πμt)] dt        (4.2-18)

†Conditions for the existence of the Fourier transform are complicated to state in general (Champeney [1987]), but a sufficient condition for its existence is that the integral of the absolute value of f(t), or the integral of the square of f(t), be finite. Existence is seldom an issue in practice, except for idealized signals, such as sinusoids that extend forever. These are handled using generalized impulse functions. Our primary interest is in the discrete Fourier transform pair which, as you will see shortly, is guaranteed to exist for all finite functions.

FIGURE 4.3 An impulse train.


For consistency in terminology used in the previous two chapters, and to be used later in this chapter in connection with images, we refer to the domain of variable t in general as the spatial domain.

EXAMPLE 4.1: Obtaining the Fourier transform of a simple function.


If f(t) is real, we see that its transform in general is complex. Note that the Fourier transform is an expansion of f(t) multiplied by sinusoidal terms whose frequencies are determined by the values of μ (variable t is integrated out, as mentioned earlier). Because the only variable left after integration is frequency, we say that the domain of the Fourier transform is the frequency domain. We discuss the frequency domain and its properties in more detail later in this chapter. In our discussion, t can represent any continuous variable, and the units of the frequency variable μ depend on the units of t. For example, if t represents time in seconds, the units of μ are cycles/sec or Hertz (Hz). If t represents distance in meters, then the units of μ are cycles/meter, and so on. In other words, the units of the frequency domain are cycles per unit of the independent variable of the input function.

FIGURE 4.4 (a) A simple function; (b) its Fourier transform; and (c) the spectrum. All functions extend to infinity in both directions.

The Fourier transform of the function in Fig. 4.4(a) follows from Eq. (4.2-16):

F(μ) = ∫_{-∞}^{∞} f(t) e^{-j2πμt} dt = ∫_{-W/2}^{W/2} A e^{-j2πμt} dt

     = A/(j2πμ) [e^{jπμW} - e^{-jπμW}]

     = AW sin(πμW)/(πμW)

where we used the trigonometric identity sin θ = (e^{jθ} - e^{-jθ})/2j. In this case the complex terms of the Fourier transform combined nicely into a real sine function. The result in the last step of the preceding expression is known as the sinc function:

sinc(m) = sin(πm)/(πm)        (4.2-19)

where sinc(0) = 1, and sinc(m) = 0 for all other integer values of m. Figure 4.4(b) shows a plot of F(μ).

In general, the Fourier transform contains complex terms, and it is customary for display purposes to work with the magnitude of the transform (a real quantity), which is called the Fourier spectrum or the frequency spectrum:

|F(μ)| = AW |sin(πμW)/(πμW)|

Figure 4.4(c) shows a plot of |F(μ)| as a function of frequency. The key properties to note are that the locations of the zeros of both F(μ) and |F(μ)| are inversely proportional to the width, W, of the "box" function, that the height of the lobes decreases as a function of distance from the origin, and that the function extends to infinity for both positive and negative values of μ. As you will see later, these properties are quite helpful in interpreting the spectra of two-dimensional Fourier transforms of images.
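(As an added, informal check of Example 4.1 and Eq. (4.2-19): the sketch below evaluates the Fourier transform integral of the box function of Fig. 4.4(a) by numerical quadrature and compares it with the closed form AW sinc(μW). The amplitude, width, and frequency grid are arbitrary choices.)

```python
import numpy as np

A, W = 1.0, 2.0
t = np.linspace(-W / 2, W / 2, 20001)    # the box is zero outside [-W/2, W/2]
mu = np.linspace(-3.0, 3.0, 601)         # frequencies at which to evaluate F(mu)

# F(mu) = integral of A * exp(-j 2 pi mu t) dt over the box, Eq. (4.2-16)
F = np.array([np.trapz(A * np.exp(-2j * np.pi * m * t), t) for m in mu])

# Closed form from Example 4.1: F(mu) = A W sinc(mu W); np.sinc(x) = sin(pi x)/(pi x)
F_closed = A * W * np.sinc(mu * W)

print(np.max(np.abs(F.real - F_closed)))  # agreement to quadrature accuracy
```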

EXAMPLE 4.2: Fourier transform of an impulse and of an impulse train.

The Fourier transform of a unit impulse located at the origin follows from Eq. (4.2-16):

F(μ) = ∫_{-∞}^{∞} δ(t) e^{-j2πμt} dt

     = ∫_{-∞}^{∞} e^{-j2πμt} δ(t) dt

     = 1

where the third step follows from the sifting property in Eq. (4.2-9). Thus, we see that the Fourier transform of an impulse located at the origin of the spatial domain is a constant in the frequency domain. Similarly, the Fourier transform of an impulse located at t = t_0 is

F(μ) = ∫_{-∞}^{∞} δ(t - t_0) e^{-j2πμt} dt

     = ∫_{-∞}^{∞} e^{-j2πμt} δ(t - t_0) dt

     = e^{-j2πμt_0}

     = cos(2πμt_0) - j sin(2πμt_0)


where the third line follows from the sifting property in Eq. (4.2-10) and the last line follows from Euler's formula. These last two lines are equivalent representations of a unit circle centered on the origin of the complex plane.

In Section 4.3, we make use of the Fourier transform of a periodic impulse train. Obtaining this transform is not as straightforward as we just showed for individual impulses. However, understanding how to derive the transform of an impulse train is quite important, so we take the time to derive it in detail here. We start by noting that the only difference in the form of Eqs. (4.2-16) and (4.2-17) is the sign of the exponential. Thus, if a function f(t) has the Fourier transform F(μ), then the latter function evaluated at t, that is, F(t), must have the transform f(-μ). Using this symmetry property and given, as we showed above, that the Fourier transform of an impulse δ(t - t_0) is e^{-j2πμt_0}, it follows that the function e^{-j2πt_0 t} has the transform δ(-μ - t_0). By letting -t_0 = a, it follows that the transform of e^{j2πat} is δ(-μ + a) = δ(μ - a), where the last step is true because δ is nonzero only when μ = a, which is the same result for either δ(-μ + a) or δ(μ - a), so the two forms are equivalent.
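(A discrete analog of the result that the transform of e^{j2πat} is δ(μ - a): the DFT of a complex exponential whose frequency falls exactly on a DFT bin is a single spike. The sketch below, added here as an illustration rather than taken from the text, verifies this with NumPy's FFT; N and k0 are arbitrary.)

```python
import numpy as np

N, k0 = 64, 5
n = np.arange(N)
f = np.exp(2j * np.pi * k0 * n / N)   # discrete complex exponential at bin k0

F = np.fft.fft(f)
print(np.argmax(np.abs(F)))           # 5: all the energy sits in bin k0
print(np.sum(np.abs(F) > 1e-6))       # 1: a single spike in the frequency domain
```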

The impulse train s_{ΔT}(t) in Eq. (4.2-14) is periodic with period ΔT, so we know from Section 4.2.2 that it can be expressed as a Fourier series:

s_{ΔT}(t) = Σ_{n=-∞}^{∞} c_n e^{j2πnt/ΔT}

where

c_n = (1/ΔT) ∫_{-ΔT/2}^{ΔT/2} s_{ΔT}(t) e^{-j2πnt/ΔT} dt

With reference to Fig. 4.3, we see that the integral in the interval [-ΔT/2, ΔT/2] encompasses only the impulse of s_{ΔT}(t) that is located at the origin. Therefore, the preceding equation becomes

c_n = (1/ΔT) ∫_{-ΔT/2}^{ΔT/2} δ(t) e^{-j2πnt/ΔT} dt

    = (1/ΔT) e⁰

    = 1/ΔT

The Fourier series expansion then becomes

s_{ΔT}(t) = (1/ΔT) Σ_{n=-∞}^{∞} e^{j2πnt/ΔT}

Our objective is to obtain the Fourier transform of this expression. Because summation is a linear process, obtaining the Fourier transform of a sum is the same as obtaining the sum of the transforms of the individual components. These components are exponentials, and we established earlier in this example that

ℑ{ e^{j2πnt/ΔT} } = δ(μ - n/ΔT)

So, S(μ), the Fourier transform of the periodic impulse train s_{ΔT}(t), is

S(μ) = ℑ{ s_{ΔT}(t) }

     = (1/ΔT) ℑ{ Σ_{n=-∞}^{∞} e^{j2πnt/ΔT} }

     = (1/ΔT) Σ_{n=-∞}^{∞} δ(μ - n/ΔT)

This fundamental result tells us that the Fourier transform of an impulse train with period ΔT is also an impulse train, whose period is 1/ΔT. This inverse proportionality between the periods of s_{ΔT}(t) and S(μ) is analogous to what we found in Fig. 4.4 in connection with a box function and its transform. This property plays a fundamental role in the remainder of this chapter.
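(The inverse relationship between the spacings of s_{ΔT}(t) and S(μ) has a simple discrete counterpart that can be checked with a DFT: a comb of impulses m samples apart in an array of length N transforms into a comb N/m bins apart. The following sketch is our added illustration, with arbitrary N and m.)

```python
import numpy as np

N, m = 240, 8                    # array length and impulse spacing (m divides N)
comb = np.zeros(N)
comb[::m] = 1.0                  # impulses Delta_T = m samples apart

C = np.fft.fft(comb)
peaks = np.flatnonzero(np.abs(C) > 1e-6)
print(peaks[:5])                 # [  0  30  60  90 120]: spikes N/m = 30 bins apart
print(np.all(np.diff(peaks) == N // m))   # True
```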

4.2.5 Convolution

We need one more building block before proceeding. We introduced the idea of convolution in Section 3.4.2. You learned in that section that convolution of two functions involves flipping (rotating by 180°) one function about its origin and sliding it past the other. At each displacement in the sliding process, we perform a computation, which in the case of Chapter 3 was a sum of products. In the present discussion, we are interested in the convolution of two continuous functions, f(t) and h(t), of one continuous variable, t, so we have to use integration instead of a summation. The convolution of these two functions, denoted as before by the operator *, is defined as

f(t) * h(t) = ∫_{-∞}^{∞} f(τ) h(t - τ) dτ        (4.2-20)

where the minus sign accounts for the flipping just mentioned, t is the displacement needed to slide one function past the other, and τ is a dummy variable that is integrated out. We assume for now that the functions extend from -∞ to ∞.

We illustrated the basic mechanics of convolution in Section 3.4.2, and we will do so again later in this chapter and in Chapter 5. At the moment, we are interested in finding the Fourier transform of Eq. (4.2-20). We start with Eq. (4.2-15):

ℑ{ f(t) * h(t) } = ∫_{-∞}^{∞} [ ∫_{-∞}^{∞} f(τ) h(t - τ) dτ ] e^{-j2πμt} dt

               = ∫_{-∞}^{∞} f(τ) [ ∫_{-∞}^{∞} h(t - τ) e^{-j2πμt} dt ] dτ

The term inside the brackets is the Fourier transform of h(t - τ). We show later in this chapter that ℑ{h(t - τ)} = H(μ) e^{-j2πμτ}, where H(μ) is the Fourier transform of h(t). Using this fact in the preceding equation gives us

ℑ{ f(t) * h(t) } = ∫_{-∞}^{∞} f(τ) H(μ) e^{-j2πμτ} dτ = H(μ) ∫_{-∞}^{∞} f(τ) e^{-j2πμτ} dτ = H(μ) F(μ)

The same result would be obtained if the order of f(t) and h(t) were reversed, so convolution is commutative.

Recalling from Section 4.2.4 that we refer to the domain of t as the spatial domain, and the domain of μ as the frequency domain, the preceding equation tells us that the Fourier transform of the convolution of two functions in the spatial domain is equal to the product in the frequency domain of the Fourier transforms of the two functions. Conversely, if we have the product of the two transforms, we can obtain the convolution in the spatial domain by computing the inverse Fourier transform. In other words, f(t) * h(t) and H(μ)F(μ) are a Fourier transform pair. This result is one-half of the convolution theorem and is written as

f(t) * h(t) ⇔ H(μ) F(μ)        (4.2-21)

The double arrow is used to indicate that the expression on the right is obtained by taking the Fourier transform of the expression on the left, while the expression on the left is obtained by taking the inverse Fourier transform of the expression on the right.

Following a similar development would result in the other half of the convolution theorem:

f(t) h(t) ⇔ H(μ) * F(μ)        (4.2-22)

which states that convolution in the frequency domain is analogous to multiplication in the spatial domain, the two being related by the forward and inverse Fourier transforms, respectively. As you will see later in this chapter, the convolution theorem is the foundation for filtering in the frequency domain.
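(The convolution theorem of Eqs. (4.2-21) and (4.2-22) can be verified on discrete data, keeping in mind that the DFT pairs with circular convolution rather than the linear convolution of continuous variables. The sketch below is an added illustration with random test sequences.)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
f = rng.standard_normal(N)
h = rng.standard_normal(N)

# Circular (periodic) convolution computed directly in the "spatial" domain:
# (f * h)[k] = sum_n f[n] h[(k - n) mod N]
conv = np.array([np.sum(f * np.roll(h[::-1], k + 1)) for k in range(N)])

# Convolution theorem: the DFT of the convolution equals the product of the DFTs
lhs = np.fft.fft(conv)
rhs = np.fft.fft(f) * np.fft.fft(h)
print(np.allclose(lhs, rhs))     # True
```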


4.3 Sampling and the Fourier Transform of Sampled Functions

In this section, we use the concepts from Section 4.2 to formulate a basis for expressing sampling mathematically. This will lead us, starting from basic principles, to the Fourier transform of sampled functions.

4.3.1 Sampling

Continuous functions have to be converted into a sequence of discrete values before they can be processed in a computer. This is accomplished by using sampling and quantization, as introduced in Section 2.4. In the following discussion, we examine sampling in more detail.

FIGURE 4.5 (a) A continuous function. (b) Train of impulses used to model the sampling process. (c) Sampled function formed as the product of (a) and (b). (d) Sample values obtained by integration and using the sifting property of the impulse. (The dashed line in (c) is shown for reference. It is not part of the data.)

Taking samples ΔT units apart implies a sampling rate equal to 1/ΔT. If the units of ΔT are seconds, then the sampling rate is in samples/s. If the units of ΔT are meters, then the sampling rate is in samples/m, and so on.

With reference to Fig. 4.5, consider a continuous function, f(t), that we wish to sample at uniform intervals, ΔT, of the independent variable t. We assume that the function extends from -∞ to ∞ with respect to t. One way to model sampling is to multiply f(t) by a sampling function equal to a train of impulses ΔT units apart, as discussed in Section 4.2.3. That is,

f̃(t) = f(t) s_{ΔT}(t) = Σ_{n=-∞}^{∞} f(t) δ(t - nΔT)        (4.3-1)

where f̃(t) denotes the sampled function. Each component of this summation is an impulse weighted by the value of f(t) at the location of the impulse, as Fig. 4.5(c) shows. The value of each sample is then given by the "strength" of the weighted impulse, which we obtain by integration. That is, the value, f_k, of an arbitrary sample in the sequence is given by

f_k = ∫_{-∞}^{∞} f(t) δ(t - kΔT) dt = f(kΔT)        (4.3-2)

where we used the sifting property of δ in Eq. (4.2-10). Equation (4.3-2) holds for any integer value k = ..., -2, -1, 0, 1, 2, .... Figure 4.5(d) shows the result, which consists of equally-spaced samples of the original function.
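(Equation (4.3-2) says that sampling simply reads off f(kΔT). The following added sketch builds such a sample sequence for an arbitrary test function; the function, ΔT, and number of samples are our own choices.)

```python
import numpy as np

def f(t):
    return np.sin(2 * np.pi * 3.0 * t)   # a "continuous" test function (3 Hz)

dT = 0.05                                 # sampling interval Delta_T: 1/dT = 20 samples/s
k = np.arange(40)                         # sample indices
f_k = f(k * dT)                           # f_k = f(k * Delta_T), Eq. (4.3-2)

print(f_k[:5])
```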

4.3.2 The Fourier Transform of Sampled Functions

Let F(μ) denote the Fourier transform of a continuous function f(t). As discussed in the previous section, the corresponding sampled function, f̃(t), is the product of f(t) and an impulse train. We know from the convolution theorem in Section 4.2.5 that the Fourier transform of the product of two functions in the spatial domain is the convolution of the transforms of the two functions in the frequency domain. Thus, the Fourier transform, F̃(μ), of the sampled function f̃(t) is:

F̃(μ) = ℑ{ f̃(t) }

     = ℑ{ f(t) s_{ΔT}(t) }

     = F(μ) * S(μ)        (4.3-3)

where, from Example 4.2,

S(μ) = (1/ΔT) Σ_{n=-∞}^{∞} δ(μ - n/ΔT)        (4.3-4)


is the Fourier transform of the impulse train s_{ΔT}(t). We obtain the convolution of F(μ) and S(μ) directly from the definition in Eq. (4.2-20):

F̃(μ) = F(μ) * S(μ)

     = ∫_{-∞}^{∞} F(τ) S(μ - τ) dτ

     = (1/ΔT) ∫_{-∞}^{∞} F(τ) Σ_{n=-∞}^{∞} δ(μ - τ - n/ΔT) dτ

     = (1/ΔT) Σ_{n=-∞}^{∞} ∫_{-∞}^{∞} F(τ) δ(μ - τ - n/ΔT) dτ

     = (1/ΔT) Σ_{n=-∞}^{∞} F(μ - n/ΔT)        (4.3-5)

where the final step follows from the sifting property of the impulse, as given in Eq. (4.2-10).

The summation in the last line of Eq. (4.3-5) shows that the Fourier transform F̃(μ) of the sampled function f̃(t) is an infinite, periodic sequence of copies of F(μ), the transform of the original, continuous function. The separation between copies is determined by the value of 1/ΔT. Observe that although f̃(t) is a sampled function, its transform F̃(μ) is continuous because it consists of copies of F(μ), which is a continuous function.

Figure 4.6 is a graphical summary of the preceding results.† Figure 4.6(a) is a sketch of the Fourier transform, F(μ), of a function f(t), and Fig. 4.6(b) shows the transform, F̃(μ), of the sampled function. As mentioned in the previous section, the quantity 1/ΔT is the sampling rate used to generate the sampled function. So, in Fig. 4.6(b) the sampling rate was high enough to provide sufficient separation between the periods and thus preserve the integrity of F(μ). In Fig. 4.6(c), the sampling rate was just enough to preserve F(μ), but in Fig. 4.6(d), the sampling rate was below the minimum required to maintain distinct copies of F(μ) and thus failed to preserve the original transform. Figure 4.6(b) is the result of an over-sampled signal, while Figs. 4.6(c) and (d) are the results of critically-sampling and under-sampling the signal, respectively. These concepts are the basis for the material in the following section.

4.3.3 The Sampling Theorem

We introduced the idea of sampling intuitively in Section 2.4. Now we consider the sampling process formally and establish the conditions under which a continuous function can be recovered uniquely from a set of its samples.

†For the sake of clarity in illustrations, sketches of Fourier transforms in Fig. 4.6, and other similar figures in this chapter, ignore the fact that transforms typically are complex functions.


FIGURE 4.6 (a) Fourier transform of a band-limited function. (b)-(d) Transforms of the corresponding sampled function under the conditions of over-sampling, critically-sampling, and under-sampling, respectively.

A function f(t) whose Fourier transform is zero for values of frequencies outside a finite interval (band) [-μ_max, μ_max] about the origin is called a band-limited function. Figure 4.7(a), which is a magnified section of Fig. 4.6(a), is such a function. Similarly, Fig. 4.7(b) is a more detailed view of the transform of a critically-sampled function shown in Fig. 4.6(c). A lower value of 1/ΔT would cause the periods in F̃(μ) to merge; a higher value would provide a clean separation between the periods.

We can recover f(t) from its sampled version if we can isolate a copy of F(μ) from the periodic sequence of copies of this function contained in F̃(μ), the transform of the sampled function f̃(t). Recall from the discussion in the previous section that F̃(μ) is a continuous, periodic function with period 1/ΔT. Therefore, all we need is one complete period to characterize the entire transform. This implies that we can recover f(t) from that single period by using the inverse Fourier transform.



Extracting from F̃(μ) a single period that is equal to F(μ) is possible if the separation between copies is sufficient (see Fig. 4.6). In terms of Fig. 4.7(b), sufficient separation is guaranteed if 1/(2ΔT) > μmax, or

1/ΔT > 2μmax                                        (4.3-6)

This equation indicates that a continuous, band-limited function can be recovered completely from a set of its samples if the samples are acquired at a rate exceeding twice the highest frequency content of the function. This result is known as the sampling theorem.† We can say based on this result that no information is lost if a continuous, band-limited function is represented by samples acquired at a rate greater than twice the highest frequency content of the function. Conversely, we can say that the maximum frequency that can be "captured" by sampling a signal at a rate 1/ΔT is μmax = 1/(2ΔT). Sampling at the Nyquist rate sometimes is sufficient for perfect function recovery, but there are cases in which this leads to difficulties, as we illustrate later in Example 4.3. Thus, the sampling theorem specifies that sampling must exceed the Nyquist rate.

†The sampling theorem is a cornerstone of digital signal processing theory. It was first formulated in 1928 by Harry Nyquist, a Bell Laboratories scientist and engineer. Claude E. Shannon, also from Bell Labs, proved the theorem formally in 1949. The renewed interest in the sampling theorem in the late 1940s was motivated by the emergence of early digital computing systems and modern communications, which created a need for methods dealing with digital (sampled) data.

FIGURE 4.7 (a) Transform of a band-limited function. (b) Transform resulting from critically sampling the same function.

A sampling rate equal to exactly twice the highest frequency is called the Nyquist rate.



FIGURE 4.8 Extracting one period of the transform of a band-limited function using an ideal lowpass filter.

The ΔT in Eq. (4.3-7) cancels out the 1/ΔT in Eq. (4.3-5).


To see how the recovery of F(μ) from F̃(μ) is possible in principle, consider Fig. 4.8, which shows the Fourier transform of a function sampled at a rate slightly higher than the Nyquist rate. The function in Fig. 4.8(b) is defined by the equation

H(μ) = { ΔT,   −μmax ≤ μ ≤ μmax
         0,    otherwise                            (4.3-7)

When multiplied by the periodic sequence in Fig. 4.8(a), this function isolates the period centered on the origin. Then, as Fig. 4.8(c) shows, we obtain F(μ) by multiplying F̃(μ) by H(μ):

F(μ) = H(μ)F̃(μ)                                     (4.3-8)

Once we have F(μ) we can recover f(t) by using the inverse Fourier transform:

f(t) = ∫_{−∞}^{∞} F(μ) e^{j2πμt} dμ                  (4.3-9)

Equations (4.3-7) through (4.3-9) prove that, theoretically, it is possible to recover a band-limited function from samples of the function obtained at a rate exceeding twice the highest frequency content of the function. As we discuss in the following section, the requirement that f(t) must be band-limited implies in general that f(t) must extend from −∞ to ∞, a condition


that cannot be met in practice. As you will see shortly, having to limit the duration of a function prevents perfect recovery of the function, except in some special cases.

Function H(μ) is called a lowpass filter because it passes frequencies at the low end of the frequency range but it eliminates (filters out) all higher frequencies. It is called also an ideal lowpass filter because of its infinitely rapid transitions in amplitude (between 0 and ΔT at location −μmax and the reverse at μmax), a characteristic that cannot be achieved with physical electronic components. We can simulate ideal filters in software, but even then there are limitations, as we explain in Section 4.7.2. We will have much more to say about filtering later in this chapter. Because they are instrumental in recovering (reconstructing) the original function from its samples, filters used for the purpose just discussed are called reconstruction filters.

4.3.4 Aliasing

A logical question at this point is: What happens if a band-limited function is sampled at a rate that is less than twice its highest frequency? This corresponds to the under-sampled case discussed in the previous section. Figure 4.9(a) is the same as Fig. 4.6(d), which illustrates this condition. The net effect of lowering the sampling rate below the Nyquist rate is that the periods now overlap, and it becomes impossible to isolate a single period of the transform, regardless of the filter used. For instance, using the ideal lowpass filter in Fig. 4.9(b) would result in a transform that is corrupted by frequencies from adjacent periods, as Fig. 4.9(c) shows. The inverse transform would then yield a corrupted function of t. This effect, caused by under-sampling a function, is known as frequency aliasing or simply as aliasing. In words, aliasing is a process in which high frequency components of a continuous function "masquerade" as lower frequencies in the sampled function. This is consistent with the common use of the term alias, which means "a false identity."

Unfortunately, except for some special cases mentioned below, aliasing is always present in sampled signals because, even if the original sampled function is band-limited, infinite frequency components are introduced the moment we limit the duration of the function, which we always have to do in practice. For example, suppose that we want to limit the duration of a band-limited function f(t) to an interval, say [0, T]. We can do this by multiplying f(t) by the function

h(t) = { 1,   0 ≤ t ≤ T
         0,   otherwise                              (4.3-10)

This function has the same basic shape as Fig. 4.4(a), whose transform, H(μ), has frequency components extending to infinity, as Fig. 4.4(b) shows. From the convolution theorem we know that the transform of the product h(t)f(t) is the convolution of the transforms of the functions. Even if the transform of f(t) is band-limited, convolving it with H(μ), which involves sliding one function across the other, will yield a result with frequency



FIGURE 4.9 (a) Fourier transform of an under-sampled, band-limited function. (Interference from adjacent periods is shown dashed in this figure.) (b) The same ideal lowpass filter used in Fig. 4.8(b). (c) The product of (a) and (b). The interference from adjacent periods results in aliasing that prevents perfect recovery of F(μ) and, therefore, of the original, band-limited continuous function. Compare with Fig. 4.8.

components extending to infinity. Therefore, no function of finite duration can be band-limited. Conversely, a function that is band-limited must extend from −∞ to ∞.†

We conclude that aliasing is an inevitable fact of working with sampled records of finite length for the reasons stated in the previous paragraph. In practice, the effects of aliasing can be reduced by smoothing the input function to attenuate its higher frequencies (e.g., by defocusing in the case of an image). This process, called anti-aliasing, has to be done before the function is sampled because aliasing is a sampling issue that cannot be "undone after the fact" using computational techniques.

†An important special case is when a function that extends from −∞ to ∞ is band-limited and periodic. In this case, the function can be truncated and still be band-limited, provided that the truncation encompasses exactly an integral number of periods. A single truncated period (and thus the function) can be represented by a set of discrete samples satisfying the sampling theorem, taken over the truncated interval.


■ Figure 4.10 shows a classic illustration of aliasing. A pure sine wave extending infinitely in both directions has a single frequency so, obviously, it is band-limited. Suppose that the sine wave in the figure (ignore the large dots for now) has the equation sin(πt), and that the horizontal axis corresponds to time, t, in seconds. The function crosses the axis at t = ... −1, 0, 1, 2, 3, ....

The period, P, of sin(πt) is 2 s, and its frequency is 1/P, or 1/2 cycles/s. According to the sampling theorem, we can recover this signal from a set of its samples if the sampling rate, 1/ΔT, exceeds twice the highest frequency of the signal. This means that a sampling rate greater than 1 sample/s [2 × (1/2) = 1], or ΔT < 1 s, is required to recover the signal. Observe that sampling this signal at exactly twice the frequency (1 sample/s), with samples taken at t = ... −1, 0, 1, 2, 3, ..., results in ... sin(−π), sin(0), sin(π), sin(2π), ..., which are all 0. This illustrates the reason why the sampling theorem requires a sampling rate that exceeds twice the highest frequency, as mentioned earlier.

The large dots in Fig. 4.10 are samples taken uniformly at a rate of less than 1 sample/s (in fact, the separation between samples exceeds 2 s, which gives a sampling rate lower than 1/2 samples/s). The sampled signal looks like a sine wave, but its frequency is about one-tenth the frequency of the original. This sampled signal, having a frequency well below anything present in the original continuous function, is an example of aliasing. Given just the samples in Fig. 4.10, the seriousness of aliasing in a case such as this is that we would have no way of knowing that these samples are not a true representation of the original function. As you will see later in this chapter, aliasing in images can produce similarly misleading results. ■
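This effect is easy to reproduce numerically. The following NumPy sketch (not part of the original text; the 2.2 s sample separation is an assumed value chosen to exceed 2 s) shows the samples of sin(πt) collapsing onto a much lower-frequency sinusoid:

    import numpy as np

    # Continuous signal: sin(pi*t), period 2 s, frequency 0.5 Hz.
    # Sample it with a separation of 2.2 s (about 0.45 samples/s, well
    # below the Nyquist rate of 1 sample/s).
    dT = 2.2
    n = np.arange(100)
    samples = np.sin(np.pi * n * dT)

    # sin(pi*n*2.2) = sin(2*pi*n + 0.2*pi*n) = sin(0.2*pi*n), so the samples
    # trace a sinusoid of only 0.1 cycles/sample, i.e., about 0.045 Hz
    # instead of 0.5 Hz: roughly one-tenth of the original frequency.
    apparent = np.sin(0.2 * np.pi * n)
    print(np.allclose(samples, apparent))   # True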

4.3.5 Function Reconstruction (Recovery) from Sampled Data

In this section, we show that reconstruction of a function from a set of its samples reduces in practice to interpolating between the samples. Even the simple act of displaying an image requires reconstruction of the image from its samples

FIGURE 4.10 Illustration of aliasing. The under-sampled function (black dots) looks like a sine wave having a frequency much lower than the frequency of the continuous signal. The period of the sine wave is 2 s, so the zero crossings of the horizontal axis occur every second. ΔT is the separation between samples.

EXAMPLE 4.3: Aliasing.

Recall that 1 cycle/s is defined as 1 Hz.


by the display medium. Therefore, it is important to understand the fundamentals of sampled data reconstruction. Convolution is central to developing this understanding, showing again the importance of this concept.

The discussion of Fig. 4.8 and Eq. (4.3-8) outlines the procedure for perfect recovery of a band-limited function from its samples using frequency domain methods. Using the convolution theorem, we can obtain the equivalent result in the spatial domain. From Eq. (4.3-8), F(μ) = H(μ)F̃(μ), so it follows that

f(t) = ℱ⁻¹{F(μ)}
     = ℱ⁻¹{H(μ)F̃(μ)}
     = h(t) ⋆ f̃(t)                                   (4.3-11)

where the last step follows from the convolution theorem, Eq. (4.2-21). It can be shown (Problem 4.6) that substituting Eq. (4.3-1) for f̃(t) into Eq. (4.3-11) and then using Eq. (4.2-20) leads to the following spatial domain expression for f(t):

f(t) = Σ_{n=−∞}^{∞} f(nΔT) sinc[(t − nΔT)/ΔT]        (4.3-12)

where the sinc function is defined in Eq. (4.2-19). This result is not unexpected because the inverse Fourier transform of the box filter, H(μ), is a sinc function (see Example 4.1). Equation (4.3-12) shows that the perfectly reconstructed function is an infinite sum of sinc functions weighted by the sample values, and has the important property that the reconstructed function is identically equal to the sample values at integer multiples of ΔT. That is, for any t = kΔT, where k is an integer, f(t) is equal to the kth sample f(kΔT). This follows from Eq. (4.3-12) because sinc(0) = 1 and sinc(m) = 0 for any other integer value of m. Between sample points, values of f(t) are interpolations formed by the sum of the sinc functions.

Equation (4.3-12) requires an infinite number of terms for the interpolations between samples. In practice, this implies that we have to look for approximations that are finite interpolations between samples. As we discussed in Section 2.4.4, the principal interpolation approaches used in image processing are nearest-neighbor, bilinear, and bicubic interpolation. We discuss the effects of interpolation on images in Section 4.5.4.
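As a rough illustration of Eq. (4.3-12), the following NumPy sketch reconstructs a band-limited test signal from its samples using a truncated version of the sinc sum (the signal, sampling interval, and number of samples are assumed; note that np.sinc(x) computes sin(πx)/(πx), matching the sinc definition used here):

    import numpy as np

    dT = 0.1                                   # sampling interval
    n = np.arange(-200, 201)                   # finite set of samples
    f = lambda t: np.sin(2 * np.pi * 1.5 * t)  # 1.5 Hz < 1/(2*dT) = 5 Hz, so band-limited
    samples = f(n * dT)

    def reconstruct(t):
        # Truncated version of Eq. (4.3-12): a sum of sinc functions
        # weighted by the sample values.
        return np.sum(samples * np.sinc((t - n * dT) / dT))

    t0 = 0.237                                 # a point between samples
    print(reconstruct(t0), f(t0))              # nearly identical away from the record edges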

4.4 The Discrete Fourier Transform (DFT) of One Variable

One of the key goals of this chapter is the derivation of the discrete Fourier transform (DFT) starting from basic principles. The material up to this point may be viewed as the foundation of those basic principles, so now we have in place the necessary tools to derive the DFT.


4.4.1 Obtaining the DFT from the Continuous Transform of a Sampled Function

As discussed in Section 4.3.2, the Fourier transform of a sampled, band-limited function extending from −∞ to ∞ is a continuous, periodic function that also extends from −∞ to ∞. In practice, we work with a finite number of samples, and the objective of this section is to derive the DFT corresponding to such sample sets.

Equation (4.3-5) gives the transform, F̃(μ), of sampled data in terms of the transform of the original function, but it does not give us an expression for F̃(μ) in terms of the sampled function f̃(t) itself. We find such an expression directly from the definition of the Fourier transform in Eq. (4.2-16):

F̃(μ) = ∫_{−∞}^{∞} f̃(t) e^{−j2πμt} dt

By substituting Eq. (4.3-1) for f̃(t), we obtain

F̃(μ) = Σ_{n=−∞}^{∞} ∫_{−∞}^{∞} f(t) δ(t − nΔT) e^{−j2πμt} dt          (4.4-1)

     = Σ_{n=−∞}^{∞} fₙ e^{−j2πμnΔT}                                     (4.4-2)

where the last step follows from Eq. (4.3-2). Although fₙ is a discrete function, its Fourier transform F̃(μ) is continuous and infinitely periodic with period 1/ΔT, as we know from Eq. (4.3-5). Therefore, all we need to characterize F̃(μ) is one period, and sampling one period is the basis for the DFT.

Suppose that we want to obtain M equally spaced samples of F̃(μ) taken over the period μ = 0 to μ = 1/ΔT. This is accomplished by taking the samples at the following frequencies:

μ = m/(MΔT),    m = 0, 1, 2, ..., M − 1              (4.4-3)

Substituting this result for μ into Eq. (4.4-2) and letting Fₘ denote the result yields

Fₘ = Σ_{n=0}^{M−1} fₙ e^{−j2πmn/M},    m = 0, 1, 2, ..., M − 1          (4.4-4)


This expression is the discrete Fourier transform we are seeking.† Given a set {fₙ} consisting of M samples of f(t), Eq. (4.4-4) yields a sample set {Fₘ} of M complex discrete values corresponding to the discrete Fourier transform of the input sample set. Conversely, given {Fₘ}, we can recover the sample set {fₙ} by using the inverse discrete Fourier transform (IDFT)

fₙ = (1/M) Σ_{m=0}^{M−1} Fₘ e^{j2πmn/M},    n = 0, 1, 2, ..., M − 1      (4.4-5)

It is not difficult to show (Problem 4.8) that substituting Eq. (4.4-5) for fₙ into Eq. (4.4-4) gives the identity Fₘ ≡ Fₘ. Similarly, substituting Eq. (4.4-4) into Eq. (4.4-5) for Fₘ yields fₙ ≡ fₙ. This implies that Eqs. (4.4-4) and (4.4-5) constitute a discrete Fourier transform pair. Furthermore, these identities indicate that the forward and inverse Fourier transforms exist for any set of samples whose values are finite. Note that neither expression depends explicitly on the sampling interval ΔT nor on the frequency intervals of Eq. (4.4-3). Therefore, the DFT pair is applicable to any finite set of discrete samples taken uniformly.

We used m and n in the preceding development to denote discrete variables because it is typical to do so for derivations. However, it is more intuitive, especially in two dimensions, to use the notation x and y for image coordinate variables and u and v for frequency variables, where these are understood to be integers.† Then, Eqs. (4.4-4) and (4.4-5) become

F(u) = Σ_{x=0}^{M−1} f(x) e^{−j2πux/M},    u = 0, 1, 2, ..., M − 1       (4.4-6)

and

f(x) = (1/M) Σ_{u=0}^{M−1} F(u) e^{j2πux/M},    x = 0, 1, 2, ..., M − 1   (4.4-7)

where we used functional notation instead of subscripts for simplicity. Clearly, F(u) ≡ Fₘ and f(x) ≡ fₙ. From this point on, we use Eqs. (4.4-6) and (4.4-7) to denote the 1-D DFT pair. Some authors include the 1/M term in Eq. (4.4-6) instead of the way we show it in Eq. (4.4-7). That does not affect the proof that the two equations form a Fourier transform pair.

†Note from Fig. 4.6(b) that the interval [0, 1/ΔT] covers two back-to-back half periods of the transform. This means that the data in Fₘ requires re-ordering to obtain samples that are ordered from the lowest to the highest frequency of a period. This is the price paid for the notational convenience of taking the samples at m = 0, 1, ..., M − 1, instead of using samples on either side of the origin, which would require the use of negative notation. The procedure to order the transform data is discussed in Section 4.6.3.

†We have been careful in using t for continuous spatial variables and μ for the corresponding continuous frequency variable. From this point on, we will use x and u to denote one-dimensional discrete spatial and frequency variables, respectively. When dealing with two-dimensional functions, we will use (t, z) and (μ, ν) to denote continuous spatial and frequency domain variables, respectively. Similarly, we will use (x, y) and (u, v) to denote their discrete counterparts.
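As an aside (not from the text), Eqs. (4.4-6) and (4.4-7) can be evaluated directly with NumPy; the sketch below uses an arbitrary sample set and checks the result against np.fft, which follows the same conventions:

    import numpy as np

    def dft(f):
        # Forward 1-D DFT, Eq. (4.4-6): F(u) = sum_x f(x) exp(-j 2 pi u x / M)
        M = len(f)
        x = np.arange(M)
        u = x.reshape(-1, 1)
        return np.sum(f * np.exp(-2j * np.pi * u * x / M), axis=1)

    def idft(F):
        # Inverse 1-D DFT, Eq. (4.4-7): f(x) = (1/M) sum_u F(u) exp(j 2 pi u x / M)
        M = len(F)
        u = np.arange(M)
        x = u.reshape(-1, 1)
        return np.sum(F * np.exp(2j * np.pi * u * x / M), axis=1) / M

    rng = np.random.default_rng(0)
    f = rng.random(8)                       # arbitrary sample set
    F = dft(f)
    print(np.allclose(F, np.fft.fft(f)))    # True: same convention as Eq. (4.4-6)
    print(np.allclose(idft(F), f))          # True: the pair is an identity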


It can be shown (Problem 4.9) that both the forward and inverse discrete transforms are infinitely periodic, with period M. That is,

F(u) = F(u + kM) ( 4.4-8)

and

f(x) = f(x + kM)                                     (4.4-9)

where k is an integer. The discrete equivalent of the convolution in Eq. (4.2-20) is

f(x) ⋆ h(x) = Σ_{m=0}^{M−1} f(m) h(x − m)            (4.4-10)

for x = 0, 1, 2, ..., M − 1. Because in the preceding formulations the functions are periodic, their convolution also is periodic. Equation (4.4-10) gives one period of the periodic convolution. For this reason, the process inherent in this equation often is referred to as circular convolution, and is a direct result of the periodicity of the DFT and its inverse. This is in contrast with the convolution you studied in Section 3.4.2, in which values of the displacement, x, were determined by the requirement of sliding one function completely past the other, and were not fixed to the range [0, M − 1] as in circular convolution. We discuss this difference and its significance in Section 4.6.3 and in Fig. 4.28.
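A small NumPy sketch of Eq. (4.4-10), using arbitrary 4-point sequences, and of the discrete convolution theorem it obeys:

    import numpy as np

    def circular_convolution(f, h):
        # Direct implementation of Eq. (4.4-10); the index (x - m) wraps
        # around modulo M, which is what makes the convolution circular.
        M = len(f)
        g = np.zeros(M, dtype=complex)
        for x in range(M):
            for m in range(M):
                g[x] += f[m] * h[(x - m) % M]
        return g

    f = np.array([1.0, 2.0, 4.0, 4.0])
    h = np.array([1.0, 0.0, -1.0, 0.0])
    g = circular_convolution(f, h)

    # Discrete convolution theorem: the DFT of the circular convolution
    # equals the product of the individual DFTs.
    print(np.allclose(np.fft.fft(g), np.fft.fft(f) * np.fft.fft(h)))   # True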

Finally, we point out that the convolution theorem given in Eqs. (4.2-21) and (4.2-22) is applicable also to discrete variables (Problem 4.10).

4.4.2 Relationship Between the Sampling and Frequency Intervals

If f(x) consists of M samples of a function f(t) taken ΔT units apart, the duration of the record comprising the set {f(x)}, x = 0, 1, 2, ..., M − 1, is

T = MΔT                                              (4.4-11)

The corresponding spacing, Δu, in the discrete frequency domain follows from Eq. (4.4-3):

Δu = 1/(MΔT) = 1/T                                   (4.4-12)

The entire frequency range spanned by the M components of the DFT is

Ω = MΔu = 1/ΔT                                       (4.4-13)

Thus, we see from Eqs. (4.4-12) and (4.4-13) that the resolution in frequency, Δu, of the DFT depends on the duration T over which the continuous function, f(t), is sampled, and the range of frequencies spanned by the DFT depends on the sampling interval ΔT. Observe that both expressions exhibit inverse relationships with respect to T and ΔT.
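These relationships are easy to confirm numerically. In the sketch below (illustrative values only), np.fft.fftfreq gives the frequencies of the M DFT components for samples spaced ΔT apart:

    import numpy as np

    M, dT = 256, 0.01                  # 256 samples taken 0.01 s apart
    T = M * dT                         # duration of the record, Eq. (4.4-11)
    freqs = np.fft.fftfreq(M, d=dT)    # frequencies of the M DFT components

    du = freqs[1] - freqs[0]           # spacing between frequency samples
    print(np.isclose(du, 1 / T))       # True: Eq. (4.4-12)
    print(np.isclose(M * du, 1 / dT))  # True: Eq. (4.4-13), the full frequency range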

It is not obvious why the discrete function f(x) should be periodic, considering that the continuous function from which it was sampled may not be. One informal way to reason this out is to keep in mind that sampling results in a periodic DFT. It is logical that f(x), which is the inverse DFT, has to be periodic also for the DFT pair to exist.


EXAMPLE 4.4: The mechanics of computing the DFf.

FIGURE 4.11 (a) A function, and (b) samples in the x-domain. In (a), t is a continuous variable; in (b), x represents integer values.

■ Figure 4.11(a) shows four samples of a continuous function, f(t), taken ΔT units apart. Figure 4.11(b) shows the sampled values in the x-domain. Note that the values of x are 0, 1, 2, and 3, indicating that we could be referring to any four samples of f(t).

From Eq. (4.4-6),

F(0) = Σ_{x=0}^{3} f(x) = f(0) + f(1) + f(2) + f(3) = 1 + 2 + 4 + 4 = 11

The next value of F(u) is

F(1) = Σ_{x=0}^{3} f(x) e^{−j2π(1)x/4} = 1e^{0} + 2e^{−jπ/2} + 4e^{−jπ} + 4e^{−j3π/2} = −3 + 2j

Similarly, F(2) = −(1 + 0j) and F(3) = −(3 + 2j). Observe that all values of f(x) are used in computing each term of F(u).

If instead we were given F(u) and were asked to compute its inverse, we would proceed in the same manner, but using the inverse transform. For instance,

f(0) = (1/4) Σ_{u=0}^{3} F(u) e^{j2πu(0)}
     = (1/4) Σ_{u=0}^{3} F(u)
     = (1/4) [11 − 3 + 2j − 1 − 3 − 2j]
     = (1/4) [4] = 1

which agrees with Fig. 4.11(b). The other values of f(x) are obtained in a similar manner. ■
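The numbers in this example can be verified with NumPy, whose fft and ifft routines follow the same conventions as Eqs. (4.4-6) and (4.4-7):

    import numpy as np

    f = np.array([1.0, 2.0, 4.0, 4.0])    # the four samples of Fig. 4.11(b)
    F = np.fft.fft(f)
    print(F)                # approximately [11.+0.j  -3.+2.j  -1.+0.j  -3.-2.j]
    print(np.fft.ifft(F))   # recovers [1. 2. 4. 4.] (up to round-off)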



4.5 Extension to Functions of Two Variables

In this section, we extend to two variables the concepts introduced in Sections 4.2 through 4.4.

4.5.1 The 2-D Impulse and Its Sifting Property

The impulse, δ(t, z), of two continuous variables, t and z, is defined as in Eq. (4.2-8):

δ(t, z) = { ∞,   if t = z = 0
            0,   otherwise                            (4.5-1a)

and

∫_{−∞}^{∞} ∫_{−∞}^{∞} δ(t, z) dt dz = 1               (4.5-1b)

As in the 1-D case, the 2-D impulse exhibits the sifting property under integration,

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(t, z) δ(t, z) dt dz = f(0, 0)                    (4.5-2)

or, more generally for an impulse located at coordinates (t₀, z₀),

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(t, z) δ(t − t₀, z − z₀) dt dz = f(t₀, z₀)        (4.5-3)

As before, we see that the sifting property yields the value of the function f(t, z) at the location of the impulse.

For discrete variables x and y, the 2-D discrete impulse is defined as

δ(x, y) = { 1,   if x = y = 0
            0,   otherwise                            (4.5-4)

and its sifting property is

Σ_{x=−∞}^{∞} Σ_{y=−∞}^{∞} f(x, y) δ(x, y) = f(0, 0)                       (4.5-5)

where f(x, y) is a function of discrete variables x and y. For an impulse located at coordinates (x₀, y₀) (see Fig. 4.12) the sifting property is

Σ_{x=−∞}^{∞} Σ_{y=−∞}^{∞} f(x, y) δ(x − x₀, y − y₀) = f(x₀, y₀)           (4.5-6)

As before, the sifting property of a discrete impulse yields the value of the discrete function f(x, y) at the location of the impulse.



FIGURE 4.12 Two-dimensional unit discrete impulse. Variables x and y are discrete, and δ is zero everywhere except at coordinates (x₀, y₀).

EXAMPLE 4.5: Obtaining the 2-D Fourier transform of a simple function.


4.5.2 The 2-D Continuous Fourier Transform Pair

Let f(t, z) be a continuous function of two continuous variables, t and z. The two-dimensional, continuous Fourier transform pair is given by the expressions

F(μ, ν) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(t, z) e^{−j2π(μt + νz)} dt dz           (4.5-7)

and

f(t, z) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} F(μ, ν) e^{j2π(μt + νz)} dμ dν            (4.5-8)

where μ and ν are the frequency variables. When referring to images, t and z are interpreted to be continuous spatial variables. As in the 1-D case, the domain of the variables μ and ν defines the continuous frequency domain.

Figure 4.13(a) shows a 2-D function analogous to the 1-D case in Example 4.1. Following a procedure similar to the one used in that example gives the result

F(μ, ν) = ∫_{−T/2}^{T/2} ∫_{−Z/2}^{Z/2} A e^{−j2π(μt + νz)} dt dz
        = ATZ [sin(πμT)/(πμT)] [sin(πνZ)/(πνZ)]

The magnitude (spectrum) is given by the expression

|F(μ, ν)| = ATZ |sin(πμT)/(πμT)| |sin(πνZ)/(πνZ)|

Figure 4.13(b) shows a portion of the spectrum about the origin. As in the 1-D case, the locations of the zeros in the spectrum are inversely proportional to


FIGURE 4.13 (a) A 2-D function, and (b) a section of its spectrum (not to scale). The block is longer along the t-axis, so the spectrum is more "contracted" along the μ-axis. Compare with Fig. 4.4.

the values of T and Z. Thus, the larger T and Z are, the more "contracted" the spectrum will become, and vice versa. ■
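A rough numerical analogue of this example (a discrete approximation of the continuous transform; the array and box sizes are assumed):

    import numpy as np

    # Discrete analogue of the 2-D box of Fig. 4.13(a): value A over a
    # T x Z rectangle embedded in an N x N array.
    A, T, Z, N = 1.0, 40, 20, 512
    f = np.zeros((N, N))
    f[N//2 - T//2 : N//2 + T//2, N//2 - Z//2 : N//2 + Z//2] = A

    S = np.abs(np.fft.fftshift(np.fft.fft2(f)))   # centered magnitude spectrum

    # Along the axis where the box is longer (T = 40 samples) the spectral
    # lobes are narrower (zeros roughly N/T = 12.8 samples apart); along the
    # shorter axis (Z = 20) they are wider (roughly N/Z = 25.6 apart),
    # illustrating the inverse relationship noted in the example.
    print(S.shape, N / T, N / Z)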

4.5.3 Two-Dimensional Sampling and the 2-D Sampling Theorem

In a manner similar to the 1-D case, sampling in two dimensions can be modeled using the sampling function (2-D impulse train):

s_{ΔT ΔZ}(t, z) = Σ_{m=−∞}^{∞} Σ_{n=−∞}^{∞} δ(t − mΔT, z − nΔZ)           (4.5-9)

where ΔT and ΔZ are the separations between samples along the t- and z-axis of the continuous function f(t, z). Equation (4.5-9) describes a set of periodic impulses extending infinitely along the two axes (Fig. 4.14). As in the 1-D case illustrated in Fig. 4.5, multiplying f(t, z) by s_{ΔT ΔZ}(t, z) yields the sampled function.

Function f(t, z) is said to be band-limited if its Fourier transform is 0 outside a rectangle established by the intervals [−μmax, μmax] and [−νmax, νmax]; that is,

F(μ, ν) = 0   for |μ| ≥ μmax and |ν| ≥ νmax                               (4.5-10)

The two-dimensional sampling theorem states that a continuous, band-limited function f(t, z) can be recovered with no error from a set of its samples if the sampling intervals are

ΔT < 1/(2μmax)                                       (4.5-11)

and

ΔZ < 1/(2νmax)                                       (4.5-12)

or, expressed in terms of the sampling rate, if


FIGURE 4.14 Two-dimensional impulse train.

FIGURE 4.15 Two-dimensional Fourier transforms of (a) an over-sampled, and (b) under-sampled band-limited function.


1/ΔT > 2μmax                                         (4.5-13)

and

1/ΔZ > 2νmax                                          (4.5-14)

Stated another way, we say that no information is lost if a 2-D, band-limited, continuous function is represented by samples acquired at rates greater than twice the highest frequency content of the function in both the μ- and ν-directions.

Figure 4.15 shows the 2-D equivalents of Figs. 4.6(b) and (d). A 2-D ideal box filter has the form illustrated in Fig. 4.13(a). The dashed portion of Fig. 4.15(a) shows the location of the filter to achieve the necessary isolation of a single period of the transform for reconstruction of a band-limited function from its samples, as in Section 4.3.3. From Section 4.3.4, we know that if the function is under-sampled the periods overlap, and it becomes impossible to isolate a single period, as Fig. 4.15(b) shows. Aliasing would result under such conditions.

4.5.4 Aliasing in Images

In this section, we extend the concept of aliasing to images and discuss several aspects related to image sampling and resampling.



Extension from 1-D aliasing

As in the 1-D case, a continuous function f(t, z) of two continuous variables, t and z, can be band-limited in general only if it extends infinitely in both coordinate directions. The very act of limiting the duration of the function introduces corrupting frequency components extending to infinity in the frequency domain, as explained in Section 4.3.4. Because we cannot sample a function infinitely, aliasing is always present in digital images, just as it is present in sampled 1-D functions. There are two principal manifestations of aliasing in images: spatial aliasing and temporal aliasing. Spatial aliasing is due to under-sampling, as discussed in Section 4.3.4. Temporal aliasing is related to time intervals between images in a sequence of images. One of the most common examples of temporal aliasing is the "wagon wheel" effect, in which wheels with spokes in a sequence of images (for example, in a movie) appear to be rotating backwards. This is caused by the frame rate being too low with respect to the speed of wheel rotation in the sequence.

Our focus in this chapter is on spatial aliasing. The key concerns with spatial aliasing in images are the introduction of artifacts such as jaggedness in line features, spurious highlights, and the appearance of frequency patterns not present in the original image. The following example illustrates aliasing in images.

■ Suppose that we have an imaging system that is perfect, in the sense that it is noiseless and produces an exact digital image of what it sees, but the number of samples it can take is fixed at 96 × 96 pixels. If we use this system to digitize checkerboard patterns, it will be able to resolve patterns that are up to 96 × 96 squares, in which the size of each square is 1 × 1 pixels. In this limiting case, each pixel in the resulting image will correspond to one square in the pattern. We are interested in examining what happens when the detail (the size of the checkerboard squares) is less than one camera pixel; that is, when the imaging system is asked to digitize checkerboard patterns that have more than 96 × 96 squares in the field of view.

Figures 4.16(a) and (b) show the result of sampling checkerboards whose squares are of size 16 and 6 pixels on the side, respectively. These results are as expected. However, when the size of the squares is reduced to slightly less than one camera pixel, a severely aliased image results, as Fig. 4.16(c) shows. Finally, reducing the size of the squares to slightly less than 0.5 pixels on the side yielded the image in Fig. 4.16(d). In this case, the aliased result looks like a normal checkerboard pattern. In fact, this image would result from sampling a checkerboard image whose squares were 12 pixels on the side. This last image is a good reminder that aliasing can create results that may be quite misleading. ■
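A small NumPy sketch of this experiment (the "perfect" camera is simulated by point-sampling an ideal continuous checkerboard; the square sizes are assumed values):

    import numpy as np

    def sample_checkerboard(square_size, n=96):
        # Point-sample a continuous checkerboard with squares of the given
        # size (in pixels) on an n x n sampling grid.
        x = np.arange(n)
        X, Y = np.meshgrid(x, x)
        return (np.floor(X / square_size) + np.floor(Y / square_size)) % 2

    faithful = sample_checkerboard(16)    # squares much larger than a pixel
    aliased = sample_checkerboard(0.95)   # squares just under one pixel

    # The second result "masquerades" as a much coarser checkerboard,
    # analogous to Figs. 4.16(c) and (d).
    print(faithful.shape, aliased.shape)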

The effects of aliasing can be reduced by slightly defocusing the scene to be digitized so that high frequencies are attenuated. As explained in Section 4.3.4, anti-aliasing filtering has to be done at the "front-end," before the image is sampled. There are no such things as after-the-fact software anti-aliasing filters that can be used to reduce the effects of aliasing caused by violations of the sampling theorem. Most commercial digital image manipulation packages do have a feature called "anti-aliasing." However, as illustrated in Examples 4.7

EXAMPLE 4.6: Aliasing in images.

This example should not be construed as being unrealistic. Sampling a "perfect" scene under noiseless, distortion-free conditions is common when converting computer-generated models and vector drawings to digital images.

" i'



FIGURE 4.16 Aliasing in images. In (a) and (b), the lengths of the sides of the squares are 16 and 6 pixels, respectively, and aliasing is visually negligible. In (c) and (d), the sides of the squares are 0.9174 and 0.4798 pixels, respectively, and the results show significant aliasing. Note that (d) masquerades as a "normal" image.

and 4.8, this term is related to blurring a digital image to reduce additional aliasing artifacts caused by resampling. The term does not apply to reducing aliasing in the original sampled image. A significant number of commercial digital cameras have true anti-aliasing filtering built in, either in the lens or on the surface of the sensor itself. For this reason, it is difficult to illustrate aliasing using images obtained with such cameras.

Image interpolation and resampling

As in the 1-D case, perfect reconstruction of a band-limited image function from a set of its samples requires 2-D convolution in the spatial domain with a sinc function. As explained in Section 4.3.5, this theoretically perfect reconstruction requires interpolation using infinite summations which, in practice, forces us to look for approximations. One of the most common applications of 2-D interpolation in image processing is in image resizing (zooming and shrinking). Zooming may be viewed as over-sampling, while shrinking may be viewed as under-sampling. The key difference between these two operations and the sampling concepts discussed in previous sections is that zooming and shrinking are applied to digital images.

Interpolation was explained in Section 2.4.4. Our interest there was to illustrate the performance of nearest-neighbor, bilinear, and bicubic interpolation. In this section, we give some additional examples with a focus on sampling and anti-aliasing issues. A special case of nearest-neighbor interpolation that ties in nicely with over-sampling is zooming by pixel replication, which is applicable when we want to increase the size of an image an integer number of times. For


instance, to double the size of an image, we duplicate each column. This doubles the image size in the horizontal direction. Then, we duplicate each row of the enlarged image to double the size in the vertical direction. The same procedure is used to enlarge the image any integer number of times. The intensity-level assignment of each pixel is predetermined by the fact that new locations are exact duplicates of old locations.

Image shrinking is done in a manner similar to zooming. Under-sampling is achieved by row-column deletion (e.g., to shrink an image by one-half, we delete every other row and column). We can use the zooming grid analogy in Section 2.4.4 to visualize the concept of shrinking by a non-integer factor, except that we now expand the grid to fit over the original image, do intensity-level interpolation, and then shrink the grid back to its specified size. To reduce aliasing, it is a good idea to blur an image slightly before shrinking it (we discuss frequency domain blurring in Section 4.8). An alternate technique is to super-sample the original scene and then reduce (resample) its size by row and column deletion. This can yield sharper results than with smoothing, but it clearly requires access to the original scene. Clearly, if we have no access to the original scene (as typically is the case in practice) super-sampling is not an option.
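Both operations have simple NumPy equivalents, sketched below for a tiny assumed array: zooming by pixel replication and shrinking by row-column deletion.

    import numpy as np

    img = np.arange(16.0).reshape(4, 4)       # a tiny test "image"

    # Zoom by pixel replication: duplicate each row, then each column.
    zoomed = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)   # 8 x 8

    # Shrink by one-half using row-column deletion: keep every other
    # row and column.
    shrunk = img[::2, ::2]                                     # 2 x 2

    print(zoomed.shape, shrunk.shape)          # (8, 8) (2, 2)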

• The effects of aliasing generally are worsened when the size of a digital image is reduced. Figure 4.17(a) is an image purposely created to illustrate the effects of aliasing (note the thinly-spaced parallel lines in all garments worn by the subject). There are no objectionable artifacts in Fig. 4.17( a), indicating that

GゥセNZゥB@FIGURE 4.17 Illustration of aliasing on (b) Result of resizing the image to 5()Lfc of (c) Result of blurring the in (a) \vith a more blurred than (b). hut Compression Lahorator\_ I

The process of resampling an image without using band-limiting blurring is called decimation.

EXAMPLE 4.7: Illustration of aliasing in resampled images.


EXAMPLE 4.8: Illustration of jaggies in image shrinking.


the sampling rate used initially was sufficient to avoid visible aliasing. In Fig. 4.17(b), the image was reduced to 50% of its original size using row-column deletion. The effects of aliasing are quite visible in this image (see, for example, the areas around the subject's knees). The digital "equivalent" of anti-aliasing filtering of continuous images is to attenuate the high frequencies of a digital image by smoothing it before resampling. Figure 4.17(c) shows the result of smoothing the image in Fig. 4.17(a) with a 3 × 3 averaging filter (see Section 3.5) before reducing its size. The improvement over Fig. 4.17(b) is evident. Images (b) and (c) were resized up to their original dimension by pixel replication to simplify comparisons. ■
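The anti-aliasing step described here can be sketched as follows (an illustrative implementation, not the book's; SciPy's uniform_filter is used for the 3 × 3 averaging and the input is assumed to be a 2-D float array):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def shrink_by_half(image, smooth_first=True):
        # Optionally attenuate high frequencies with a 3x3 averaging filter
        # (the digital "equivalent" of anti-aliasing filtering), then
        # under-sample by deleting every other row and column.
        if smooth_first:
            image = uniform_filter(image, size=3)
        return image[::2, ::2]

    # Synthetic image with closely spaced lines, which are prone to aliasing:
    img = np.zeros((256, 256))
    img[::4, :] = 1.0
    naive = shrink_by_half(img, smooth_first=False)
    better = shrink_by_half(img, smooth_first=True)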

When you work with images that have strong edge content, the effects of aliasing are seen as block-like image components, called jaggies. The following example illustrates this phenomenon .

■ Figure 4.18(a) shows a 1024 × 1024 digital image of a computer-generated scene in which aliasing is negligible. Figure 4.18(b) is the result of reducing the size of (a) by 75% to 256 × 256 pixels using bilinear interpolation and then using pixel replication to bring the image back to its original size in order to make the effects of aliasing (jaggies in this case) more visible. As in Example 4.7, the effects of aliasing can be made less objectionable by smoothing the image before resampling. Figure 4.18(c) is the result of using a 5 × 5 averaging filter prior to reducing the size of the image. As this figure shows, jaggies were reduced significantly. The size reduction and increase to

the original size in Fig. 4.18(c) were done using the same approach used to generate Fig. 4.18(b).

FIGURE 4.18 Illustration of jaggies. (a) A 1024 × 1024 digital image of a computer-generated scene with negligible visible aliasing. (b) Result of reducing (a) to 25% of its original size using bilinear interpolation. (c) Result of blurring the image in (a) with a 5 × 5 averaging filter prior to resizing it using bilinear interpolation. (Original image courtesy of ...)


■ In the previous two examples, we used pixel replication to zoom the small resampled images. This is not a preferred approach in general, as Fig. 4.19 illustrates. Figure 4.19(a) shows a 1024 × 1024 zoomed image generated by pixel replication from a 256 × 256 section out of the center of the image in Fig. 4.18(a). Note the "blocky" edges. The zoomed image in Fig. 4.19(b) was generated from the same 256 × 256 section, but using bilinear interpolation. The edges in this result are considerably smoother. For example, the edges of the bottle neck and the large checkerboard squares are not nearly as blocky in (b) as they are in (a). ■

Moiré patterns

Before leaving this section, we examine another type of artifact, called moiré patterns,† that sometimes result from sampling scenes with periodic or nearly periodic components. In optics, moiré patterns refer to beat patterns produced between two gratings of approximately equal spacing. These patterns are a common everyday occurrence. We see them, for example, in overlapping insect window screens and on the interference between TV raster lines and striped materials. In digital image processing, the problem arises routinely when scanning media print, such as newspapers and magazines, or in images with periodic components whose spacing is comparable to the spacing between samples. It is important to note that moiré patterns are more general than sampling artifacts. For instance, Fig. 4.20 shows the moiré effect using ink drawings that have not been digitized. Separately, the patterns are clean and void of interference. However, superimposing one pattern on the other creates

FIGURE 4.19 Image zooming. (a) A 1024 × 1024 digital image generated by pixel replication from a 256 × 256 section extracted from the center of the image in Fig. 4.18(a). (b) Image generated from the same section using bilinear interpolation, which significantly reduces the jaggies.

†The term moiré is a French word (not the name of a person) that appears to have originated with weavers, who first noticed interference patterns visible on some fabrics; the term is related to the word mohair, a cloth made from Angora goat hairs.

EXAMPLE 4.9: Illustration of jaggies in image zooming.



FIGURE 4.20 Examples of the moiré effect. These are ink drawings, not digitized patterns. Superimposing one pattern on the other is equivalent mathematically to multiplying the patterns.

Color printing uses red, green, and blue dots to produce the sensation in the eye of continuous color.

FIGURE 4.21 A newspaper image of size 246 × 168 pixels sampled at 75 dpi showing a moiré pattern. The moiré pattern in this image is the interference pattern created between the ±45° orientation of the halftone dots and the north-south orientation of the sampling grid used to digitize the image.


a beat pattern that has frequencies not present in either of the original patterns. Note in particular the moiré effect produced by two patterns of dots, as this is the effect of interest in the following discussion.

Newspapers and other printed materials make use of so-called halftone dots, which are black dots or ellipses whose sizes and various joining schemes are used to simulate gray tones. As a rule, the following numbers are typical: newspapers are printed using 75 halftone dots per inch (dpi for short), magazines use 133 dpi, and high-quality brochures use 175 dpi. Figure 4.21 shows


what happens when a newspaper image is sampled at 75 dpi. The sampling lattice (which is oriented vertically and horizontally) and dot patterns on the newspaper image (oriented at ±45°) interact to create a uniform moiré pattern that makes the image look blotchy. (We discuss a technique in Section 4.10.2 for reducing moiré interference patterns.)

As a related point of interest, Fig. 4.22 shows a newspaper image sampled at 400 dpi to avoid moiré effects. The enlargement of the region surrounding the subject's left eye illustrates how halftone dots are used to create shades of gray. The dot size is inversely proportional to image intensity. In light areas, the dots are small or totally absent (see, for example, the white part of the eye). In light gray areas, the dots are larger, as shown below the eye. In darker areas, when dot size exceeds a specified value (typically 50%), dots are allowed to join along two specified directions to form an interconnected mesh (see, for example, the left part of the eye). In some cases the dots join along only one direction, as in the top right area below the eyebrow.

4.5.5 The 2-D Discrete Fourier Transform and Its Inverse

A development similar to the material in Sections 4.3 and 4.4 would yield the following 2-D discrete Fourier transform (DFT):

F(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux/M + vy/N)}        (4.5-15)

where f(x, y) is a digital image of size M × N. As in the 1-D case, Eq. (4.5-15) must be evaluated for values of the discrete variables u and v in the ranges u = 0, 1, 2, ..., M − 1 and v = 0, 1, 2, ..., N − 1.†

†As mentioned in Section 4.4.1, keep in mind that in this chapter we use (t, z) and (μ, ν) to denote 2-D continuous spatial and frequency-domain variables. In the 2-D discrete case, we use (x, y) for spatial variables and (u, v) for frequency-domain variables.

Sometimes you will find in the literature the 1/MN constant in front of the DFT instead of the IDFT. At times, the constant is expressed as 1/√(MN) and is included in front of both the forward and inverse transforms, thus creating a more symmetric pair. Any of these formulations is correct, provided that you are consistent.

FIGURE 4.22 A newspaper image and an enlargement showing how halftone dots are arranged to render shades of gray.


Given the transform F(u, v), we can obtain f(x, y) by using the inverse discrete Fourier transform (IDFT):

f(x, y) = (1/MN) Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) e^{j2π(ux/M + vy/N)}   (4.5-16)

for x = 0, 1, 2, ..., M − 1 and y = 0, 1, 2, ..., N − 1. Equations (4.5-15) and (4.5-16) constitute the 2-D discrete Fourier transform pair. The rest of this chapter is based on properties of these two equations and their use for image filtering in the frequency domain.
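Equations (4.5-15) and (4.5-16) use the same conventions as np.fft.fft2 and np.fft.ifft2. The sketch below (illustrative only, and deliberately slow) evaluates Eq. (4.5-15) directly on a small random array and checks it against fft2:

    import numpy as np

    def dft2(f):
        # Direct evaluation of Eq. (4.5-15). O(M^2 N^2): for illustration only.
        M, N = f.shape
        F = np.zeros((M, N), dtype=complex)
        x = np.arange(M).reshape(-1, 1)
        y = np.arange(N).reshape(1, -1)
        for u in range(M):
            for v in range(N):
                F[u, v] = np.sum(f * np.exp(-2j * np.pi * (u * x / M + v * y / N)))
        return F

    rng = np.random.default_rng(1)
    f = rng.random((4, 5))
    print(np.allclose(dft2(f), np.fft.fft2(f)))          # True
    print(np.allclose(np.fft.ifft2(np.fft.fft2(f)), f))  # True: Eq. (4.5-16) inverts Eq. (4.5-15)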

4.6 Some Properties of the 2-D Discrete Fourier Transform

In this section, we introduce several properties of the 2-D discrete Fourier transform and its inverse.

4.6.1 Relationships Between Spatial and Frequency Intervals

The relationships between spatial sampling and the corresponding frequency-domain intervals are as explained in Section 4.4.2. Suppose that a continuous function f(t, z) is sampled to form a digital image, f(x, y), consisting of M × N samples taken in the t- and z-directions, respectively. Let ΔT and ΔZ denote the separations between samples (see Fig. 4.14). Then, the separations between the corresponding discrete, frequency domain variables are given by

Δu = 1/(MΔT)                                          (4.6-1)

and

Δv = 1/(NΔZ)                                          (4.6-2)

respectively. Note that the separations between samples in the frequency domain are inversely proportional both to the spacing between spatial samples and the number of samples.

4.6.2 Translation and Rotation

It can be shown by direct substitution into Eqs. (4.5-15) and (4.5-16) that the Fourier transform pair satisfies the following translation properties (Problem 4.16):

f(x, y) e^{j2π(u₀x/M + v₀y/N)} ⇔ F(u − u₀, v − v₀)                        (4.6-3)

and

f(x − x₀, y − y₀) ⇔ F(u, v) e^{−j2π(x₀u/M + y₀v/N)}                       (4.6-4)


That is, multiplying f(x, y) by the exponential shown shifts the origin of the DFT to (u₀, v₀) and, conversely, multiplying F(u, v) by the negative of that exponential shifts the origin of f(x, y) to (x₀, y₀). As we illustrate in Example 4.13, translation has no effect on the magnitude (spectrum) of F(u, v).

Using the polar coordinates

x = r cos θ,   y = r sin θ,   u = ω cos φ,   v = ω sin φ

results in the following transform pair:

f(r, θ + θ₀) ⇔ F(ω, φ + θ₀)                                                (4.6-5)

which indicates that rotating f(x, y) by an angle θ₀ rotates F(u, v) by the same angle. Conversely, rotating F(u, v) rotates f(x, y) by the same angle.

4.6.3 Periodicity

As in the 1-D case, the 2-D Fourier transform and its inverse are infinitely periodic in the u and v directions; that is,

F(u, v) = F(u + k₁M, v) = F(u, v + k₂N) = F(u + k₁M, v + k₂N)             (4.6-6)

and

f(x, y) = f(x + k₁M, y) = f(x, y + k₂N) = f(x + k₁M, y + k₂N)             (4.6-7)

where k₁ and k₂ are integers.

The periodicities of the transform and its inverse are important issues in the implementation of DFT-based algorithms. Consider the 1-D spectrum in Fig. 4.23(a). As explained in Section 4.4.1, the transform data in the interval from 0 to M − 1 consists of two back-to-back half periods meeting at point M/2. For display and filtering purposes, it is more convenient to have in this interval a complete period of the transform in which the data are contiguous, as in Fig. 4.23(b). It follows from Eq. (4.6-3) that

f(x) e^{j2π(u₀x/M)} ⇔ F(u − u₀)

In other words, multiplying f(x) by the exponential term shown shifts the data so that the origin, F(0), is located at u₀. If we let u₀ = M/2, the exponential term becomes e^{jπx}, which is equal to (−1)^x because x is an integer. In this case,

f(x)(−1)^x ⇔ F(u − M/2)

That is, multiplying f(x) by (−1)^x shifts the data so that F(0) is at the center of the interval [0, M − 1], which corresponds to Fig. 4.23(b), as desired.

In 2-D the situation is more difficult to graph, but the principle is the same, as Fig. 4.23(c) shows. Instead of two half periods, there are now four quarter periods meeting at the point (M/2, N/2). The dashed rectangles correspond to


FIGURE 4.23 Centering the Fourier transform. (a) A 1-D DFT showing an infinite number of periods. (b) Shifted DFT obtained by multiplying f(x) by (−1)^x before computing F(u). (c) A 2-D DFT showing an infinite number of periods. The solid area is the M × N data array, F(u, v), obtained with Eq. (4.5-15). This array consists of four quarter periods. (d) A shifted DFT obtained by multiplying f(x, y) by (−1)^{x+y} before computing F(u, v). The data now contains one complete, centered period, as in (b).


the infinite number of periods of the 2-D DFT. As in the 1-D case, visualization is simplified if we shift the data so that F(0, 0) is at (M/2, N/2). Letting (u₀, v₀) = (M/2, N/2) in Eq. (4.6-3) results in the expression

f(x, y)(−1)^{x+y} ⇔ F(u − M/2, v − N/2)                                    (4.6-8)

Using this equation shifts the data so that F(0, 0) is at the center of the frequency rectangle defined by the intervals [0, M − 1] and [0, N − 1], as desired. Figure 4.23(d) shows the result. We illustrate these concepts later in this section as part of Example 4.11 and Fig. 4.24.
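A brief NumPy check of Eq. (4.6-8) on an arbitrary random array: for even M and N, multiplying f(x, y) by (−1)^{x+y} before the DFT produces the same array as computing the DFT and then applying np.fft.fftshift:

    import numpy as np

    rng = np.random.default_rng(2)
    M, N = 8, 8
    f = rng.random((M, N))

    x = np.arange(M).reshape(-1, 1)
    y = np.arange(N).reshape(1, -1)
    centered = np.fft.fft2(f * (-1.0) ** (x + y))   # Eq. (4.6-8): origin moved to (M/2, N/2)

    # np.fft.fftshift moves F(0, 0) to the center of the array directly.
    print(np.allclose(centered, np.fft.fftshift(np.fft.fft2(f))))   # True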


4.6.4 Symmetry Properties

An important result from functional analysis is that any real or complex function, w(x, y), can be expressed as the sum of an even and an odd part (each of which can be real or complex):

w(x, y) = wₑ(x, y) + wₒ(x, y)                                              (4.6-9)

where the even and odd parts are defined as

wₑ(x, y) ≜ [w(x, y) + w(−x, −y)] / 2                                      (4.6-10a)

and

wₒ(x, y) ≜ [w(x, y) − w(−x, −y)] / 2                                      (4.6-10b)

Substituting Eqs. (4.6-10a) and (4.6-10b) into Eq. (4.6-9) gives the identity w(x, y) ≡ w(x, y), thus proving the validity of the latter equation. It follows from the preceding definitions that

wₑ(x, y) = wₑ(−x, −y)                                                     (4.6-11a)

and that

wₒ(x, y) = −wₒ(−x, −y)                                                    (4.6-11b)

Even functions are said to be symmetric and odd functions are antisymmetric. Because all indices in the DFT and IDFT are positive, when we talk about symmetry (antisymmetry) we are referring to symmetry (antisymmetry) about the center point of a sequence. In terms of Eq. (4.6-11), indices to the right of the center point of a 1-D array are considered positive, and those to the left are considered negative (similarly in 2-D). In our work, it is more convenient to think only in terms of nonnegative indices, in which case the definitions of evenness and oddness become:

wₑ(x, y) = wₑ(M − x, N − y)                                               (4.6-12a)

and

wₒ(x, y) = −wₒ(M − x, N − y)                                              (4.6-12b)

where, as usual, M and N are the number of rows and columns of a 2-D array.


To convince yourself that the samples of an odd function sum to zero, sketch one period of a 1-D sine wave about the origin or any other interval spanning one period.

EXAMPLE 4.10: Even and odd functions.

We know from elementary mathematical analysis that the product of two even or two odd functions is even, and that the product of an even and an odd function is odd. In addition, the only way that a discrete function can be odd is if all its samples sum to zero. These properties lead to the important result that

Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} wₑ(x, y) wₒ(x, y) = 0                          (4.6-13)

for any two discrete even and odd functions wₑ and wₒ. In other words, because the argument of Eq. (4.6-13) is odd, the result of the summations is 0. The functions can be real or complex.

■ Although evenness and oddness are visualized easily for continuous functions, these concepts are not as intuitive when dealing with discrete sequences. The following illustrations will help clarify the preceding ideas. Consider the 1-D sequence

f = {f(0) f(1) f(2) f(3)} = {2 1 1 1}

in which M = 4. To test for evenness, the condition f(x) = f(4 - x) must be satisfied; that is, we require that

f(0) = f(4),   f(2) = f(2),   f(1) = f(3),   f(3) = f(1)

Because f(4) is outside the range being examined, and it can be any value, the value of f(0) is immaterial in the test for evenness. We see that the next three conditions are satisfied by the values in the array, so the sequence is even. In fact, we conclude that any 4-point even sequence has to have the form

{a b c b}

That is, only the second and last points must be equal in a 4-point even sequence.

An odd sequence has the interesting property that its first term, wₒ(0, 0), is always 0, a fact that follows directly from Eq. (4.6-10b). Consider the 1-D sequence

g = {g(0) g(1) g(2) g(3)} = {0 −1 0 1}


We easily can confirm that this is an odd sequence by noting that the terms in the sequence satisfy the condition g(x) = - g( 4 - x). For example, g(l) = -g(3). Any 4-point odd sequence has the form

{O -b 0 b}

That is, when M is an even number, a 1-D odd sequence has the property that the points at locations 0 and M/2 always are zero. When M is odd, the first term still has to be 0, but the remaining terms form pairs with equal value but opposite sign.

The preceding discussion indicates that evenness and oddness of sequences depend also on the length of the sequences. For example, we already showed that the sequence {0 −1 0 1} is odd. However, the sequence {0 −1 0 1 0} is neither odd nor even, although the "basic" structure appears to be odd. This is an important issue in interpreting DFT results. We show later in this section that the DFTs of even and odd functions have some very important characteristics. Thus, it often is the case that understanding when a function is odd or even plays a key role in our ability to interpret image results based on DFTs.

The same basic considerations hold in 2-D. For example, the 6 × 6 2-D sequence

0  0   0  0  0  0
0  0   0  0  0  0
0  0  −1  0  1  0
0  0  −2  0  2  0
0  0  −1  0  1  0
0  0   0  0  0  0

is odd. However, adding another row and column of 0s would give a result that is neither odd nor even. Note that the inner structure of this array is a Sobel mask, as discussed in Section 3.6.4. We return to this mask in Example 4.15. ■

Armed with the preceding concepts, we can establish a number of important symmetry properties of the DFT and its inverse. A property used frequently is that the Fourier transform of a real function, f(x, y), is conjugate symmetric:

F*(u, v) = F(-u, -v) (4.6-14)

If f(x, y) is imaginary, its Fourier transform is conjugate antisymmetric: F*(−u, −v) = −F(u, v). The proof of Eq. (4.6-14) is as follows:

F*(u, v) = [ Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π(ux/M + vy/N)} ]*

As an exercise, you should use Eq. (4.6-12b) to convince yourself that this 2-D sequence is odd.

Conjugate symmetry also is called hermitian symmetry. The term antihermitian is used sometimes to refer to conjugate antisymmetry.


TABLE 4.1 Some symmetry properties of the 2-D DFT and its inverse. R(u, v) and I(u, v) are the real and imaginary parts of F(u, v), respectively. The term complex indicates that a function has nonzero real and imaginary parts.

= Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f*(x, y) e^{j2π(ux/M + vy/N)}

= Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) e^{−j2π([−u]x/M + [−v]y/N)}

= F(−u, −v)

where the third step follows from the fact that f(x, y) is real. A similar approach can be used to prove the conjugate antisymmetry exhibited by the transform of imaginary functions.
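A quick numerical check of Eq. (4.6-14) on an arbitrary real array (negative indices are taken modulo M and N, consistent with the periodicity of the DFT):

    import numpy as np

    rng = np.random.default_rng(3)
    f = rng.random((6, 7))                 # a real "image"
    F = np.fft.fft2(f)

    M, N = F.shape
    u = np.arange(M).reshape(-1, 1)
    v = np.arange(N).reshape(1, -1)
    F_neg = F[(-u) % M, (-v) % N]          # F(-u, -v), with periodic indexing

    print(np.allclose(np.conj(F), F_neg))  # True: F*(u, v) = F(-u, -v)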

Table 4.1 lists symmetries and related properties of the DFT that are useful in digital image processing. Recall that the double arrows indicate Fourier transform pairs; that is, for any row in the table, the properties on the right are satisfied by the Fourier transform of the function having the properties listed on the left, and vice versa. For example, entry 5 reads: The DFT of a real function f(x, y), in which (x, y) is replaced by (−x, −y), is F*(u, v), where F(u, v), the DFT of f(x, y), is a complex function, and vice versa.

     Spatial Domain                       Frequency Domain
 1)  f(x, y) real                   <=>   F*(u, v) = F(-u, -v)
 2)  f(x, y) imaginary              <=>   F*(-u, -v) = -F(u, v)
 3)  f(x, y) real                   <=>   R(u, v) even; I(u, v) odd
 4)  f(x, y) imaginary              <=>   R(u, v) odd; I(u, v) even
 5)  f(-x, -y) real                 <=>   F*(u, v) complex
 6)  f(-x, -y) complex              <=>   F(-u, -v) complex
 7)  f*(x, y) complex               <=>   F*(-u, -v) complex
 8)  f(x, y) real and even          <=>   F(u, v) real and even
 9)  f(x, y) real and odd           <=>   F(u, v) imaginary and odd
10)  f(x, y) imaginary and even     <=>   F(u, v) imaginary and even
11)  f(x, y) imaginary and odd      <=>   F(u, v) real and odd
12)  f(x, y) complex and even       <=>   F(u, v) complex and even
13)  f(x, y) complex and odd        <=>   F(u, v) complex and odd

Recall that x, y, u, and v are discrete (integer) variables, with x and u in the range [0, M - 1], and y and v in the range [0, N - 1]. To say that a complex function is even means that its real and imaginary parts are even, and similarly for an odd complex function.


EXAMPLE 4.11: 1-D illustrations of properties from Table 4.1.

With reference to the even and odd concepts discussed earlier and illustrated in Example 4.10, the following 1-D sequences and their transforms are short examples of the properties listed in Table 4.1. The numbers in parentheses on the right are the individual elements of F(u), and similarly for f(x) in the last two properties.

Property    f(x)                                              F(u)
  3     {1 2 3 4}                                 <=>  {(10) (-2 + 2j) (-2) (-2 - 2j)}
  4     j{1 2 3 4}                                <=>  {(10j) (-2 - 2j) (-2j) (2 - 2j)}
  8     {2 1 1 1}                                 <=>  {(5) (1) (1) (1)}
  9     {0 -1 0 1}                                <=>  {(0) (2j) (0) (-2j)}
 10     j{2 1 1 1}                                <=>  {(5j) (j) (j) (j)}
 11     j{0 -1 0 1}                               <=>  {(0) (-2) (0) (2)}
 12     {(4 + 4j) (3 + 2j) (0 + 2j) (3 + 2j)}     <=>  {(10 + 10j) (4 + 2j) (-2 + 2j) (4 + 2j)}
 13     {(0 + 0j) (1 + 1j) (0 + 0j) (-1 - 1j)}    <=>  {(0 + 0j) (2 - 2j) (0 + 0j) (-2 + 2j)}

For example, in property 3 we see that a real function with elements {1 2 3 4} has a Fourier transform whose real part, {10 -2 -2 -2}, is even and whose imaginary part, {0 2 0 -2}, is odd. Property 8 tells us that a real even function has a transform that is real and even also. Property 12 shows that an even complex function has a transform that is also complex and even. The other property examples are analyzed in a similar manner.
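These entries are easy to verify numerically. A minimal sketch, assuming NumPy (whose numpy.fft.fft implements the same unscaled forward DFT used here), checks property 3 for the sequence {1 2 3 4}:

    import numpy as np

    f = np.array([1, 2, 3, 4], dtype=float)
    F = np.fft.fft(f)                  # forward DFT, no scaling
    print(np.round(F, 4))              # [10.+0.j  -2.+2.j  -2.+0.j  -2.-2.j]

    R, I = F.real, F.imag
    even = lambda w: np.allclose(w, np.roll(w[::-1], 1))    # w(u) ==  w(M - u)
    odd  = lambda w: np.allclose(w, -np.roll(w[::-1], 1))   # w(u) == -w(M - u)
    print(even(R), odd(I))             # True True: real part even, imaginary part odd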

EXAMPLE 4.12: Proving several symmetry properties of the DFT from Table 4.1.

In this example, we prove several of the properties in Table 4.1 to develop familiarity with manipulating these important properties, and to establish a basis for solving some of the problems at the end of the chapter. We prove only the properties on the right given the properties on the left. The converse is proved in a manner similar to the proofs we give here.

Consider property 3, which reads: If f(x, y) is a real function, the real part of its DFT is even and the imaginary part is odd; similarly, if a DFT has real and imaginary parts that are even and odd, respectively, then its IDFT is a real function. We prove this property formally as follows. F(u, v) is complex in general, so it can be expressed as the sum of a real and an imaginary part: F(u, v) = R(u, v) + jI(u, v). Then, F*(u, v) = R(u, v) - jI(u, v). Also, F(-u, -v) = R(-u, -v) + jI(-u, -v). But, as proved earlier, if f(x, y) is real then F*(u, v) = F(-u, -v), which, based on the preceding two equations, means that R(u, v) = R(-u, -v) and I(u, v) = -I(-u, -v). In view of Eqs. (4.6-11a) and (4.6-11b), this proves that R is an even function and I is an odd function.

Next, we prove property 8. If f(x, y) is real, we know from property 3 that the real part of F(u, v) is even, so to prove property 8 all we have to do is show that if f(x, y) is real and even then the imaginary part of F(u, v) is 0 (i.e., F is real). The steps are as follows:

F(u, v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) e^{-j2π(ux/M + vy/N)}

which we can write as



F(u, v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [f_r(x, y)] e^{-j2π(ux/M + vy/N)}

        = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [f_r(x, y)] e^{-j2π(ux/M)} e^{-j2π(vy/N)}

        = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [even][even - j odd][even - j odd]

        = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [even][even · even - 2j even · odd - odd · odd]

        = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [even · even] - 2j Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [even · odd]

        = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [even · even]

        = real

The fourth step follows from Euler's equation and the fact that the cos and sin are even and odd functions, respectively. We also know from property 8 that, in addition to being real, f is an even function. The only term in the penultimate line containing imaginary components is the second term, which is 0 according to Eq. (4.6-14). Thus, if f is real and even then F is real. As noted earlier, F is also even because f is real. This concludes the proof.

Finally, we prove the validity of property 6. From the definition of the DFT,

ℑ{f(-x, -y)} = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(-x, -y) e^{-j2π(ux/M + vy/N)}

(Note that we are not making a change of variable here. We are evaluating the DFT of f(-x, -y), so we simply insert this function into the equation, as we would any other function.)

Because of periodicity, f(-x, -y) = f(M - x, N - y). If we now define m = M - x and n = N - y, then

ℑ{f(-x, -y)} = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m, n) e^{-j2π(u[M-m]/M + v[N-n]/N)}

(To convince yourself that the summations are correct, try a 1-D transform and expand a few terms by hand.) Because exp[-j2π(integer)] = 1, it follows that

ℑ{f(-x, -y)} = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m, n) e^{j2π(um/M + vn/N)}

             = F(-u, -v)

This concludes the proof.
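A numerical spot check of this result is straightforward. In the sketch below (assuming NumPy; the modular re-indexing with np.roll is our own bookkeeping, not the book's notation), f(-x, -y) and F(-u, -v) are formed by reflecting and re-anchoring the arrays, and both the relation just proved and Eq. (4.6-14) hold for a real test array:

    import numpy as np

    rng = np.random.default_rng(0)
    f = rng.random((6, 5))                                        # arbitrary real test array

    F = np.fft.fft2(f)
    reflect = lambda a: np.roll(a[::-1, ::-1], (1, 1), (0, 1))    # a(-x, -y), indices mod M, N

    print(np.allclose(np.fft.fft2(reflect(f)), reflect(F)))       # DFT of f(-x, -y) equals F(-u, -v)
    print(np.allclose(np.conj(F), reflect(F)))                    # Eq. (4.6-14), since f is real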

4.6.5 Fourier Spectrum and Phase Angle

Because the 2-D DFT is complex in general, it can be expressed in polar form:

F(u, v) = |F(u, v)| e^{jφ(u, v)}    (4.6-15)

where the magnitude

|F(u, v)| = [R^2(u, v) + I^2(u, v)]^{1/2}    (4.6-16)

is called the Fourier (or frequency) spectrum, and

φ(u, v) = arctan[ I(u, v) / R(u, v) ]    (4.6-17)

is the phase angle. Recall from the discussion in Section 4.2.1 that the arctan must be computed using a four-quadrant arctangent, such as MATLAB's atan2 (Imag, Real) function.

Finally, the power spectrum is defined as

P(u, v) = |F(u, v)|^2
        = R^2(u, v) + I^2(u, v)    (4.6-18)

As before, R and I are the real and imaginary parts of F(u, v), and all computations are carried out for the discrete variables u = 0, 1, 2, ..., M - 1 and v = 0, 1, 2, ..., N - 1. Therefore, |F(u, v)|, φ(u, v), and P(u, v) are arrays of size M x N.
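In NumPy terms (a minimal sketch; the small test image f below is a hypothetical stand-in, and any 2-D array will do), these three arrays are obtained directly from the transform:

    import numpy as np

    f = np.zeros((64, 64)); f[24:40, 28:36] = 1.0    # hypothetical test image (small rectangle)

    F = np.fft.fft2(f)                               # F(u, v)
    spectrum = np.abs(F)                             # |F(u, v)|, Eq. (4.6-16)
    phase = np.arctan2(F.imag, F.real)               # four-quadrant arctangent, Eq. (4.6-17)
    power = spectrum ** 2                            # P(u, v), Eq. (4.6-18)

    print(np.allclose(phase, np.angle(F)))           # True: np.angle is the same arctan2
    print(spectrum.shape, phase.shape, power.shape)  # each is an M x N array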

The Fourier transform of a real function is conjugate symmetric [Eq. (4.6-14)], which implies that the spectrum has even symmetry about the origin:

|F(u, v)| = |F(-u, -v)|    (4.6-19)

The phase angle exhibits the following odd symmetry about the origin:

φ(u, v) = -φ(-u, -v)

It follows from Eq. (4.5-15) that

F(0, 0) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y)    (4.6-20)

FIGURE 4.24 (a) Image. (b) Spectrum showing bright spots in the four corners. (c) Centered spectrum. (d) Result showing increased detail after a log transformation. The zero crossings of the spectrum are closer in the vertical direction because the rectangle in (a) is longer in that direction. The coordinate convention used throughout the book places the origin of the spatial and frequency domains at the top left.

which indicates that the value of the transform at the origin is proportional to the average value of f(x, y). That is,

F(0, 0) = MN f̄(x, y)

where f̄ denotes the average value of f. Then,

|F(0, 0)| = MN |f̄(x, y)|

Because the proportionality constant MN usually is large, |F(0, 0)| typically is the largest component of the spectrum by a factor that can be several orders of magnitude larger than other terms. Because the frequency components u and v are zero at the origin, F(0, 0) sometimes is called the dc component of the transform. This terminology is from electrical engineering, where dc signifies direct current (that is, current of zero frequency).

EXAMPLE 4.13: The 2-D Fourier spectrum of a simple function.

Figure 4.24(a) shows a simple image, and Fig. 4.24(b) shows its spectrum, whose values were scaled for display in image form. The origins of both the spatial and frequency domains are at the top left. As expected, the area near the origin of the


transform contains the highest values (and thus appears brighter in the image). However, note that the four corners of the spectrum contain similarly high values. The reason is the periodicity discussed in the previous section. To center the spectrum, we simply multiply the image in (a) by (-1)^{x+y} before computing the DFT, as indicated in Eq. (4.6-8). Figure 4.24(c) shows the result, which clearly is much easier to visualize (note the symmetry about the center point). Because the dc term dominates the values of the spectrum, the dynamic range of other intensities in the displayed image is compressed. To bring out those details, we perform a log transformation, as described in Section 3.2.2. Figure 4.24(d) shows the display of (1 + log|F(u, v)|). The increased rendition of detail is evident. Most spectra shown in this and subsequent chapters are scaled in this manner.
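A minimal sketch of that display procedure, assuming NumPy (the function name centered_log_spectrum is illustrative only), is:

    import numpy as np

    def centered_log_spectrum(f):
        # Center the spectrum by multiplying by (-1)^(x+y) before the DFT, then log-scale it.
        M, N = f.shape
        x = np.arange(M).reshape(-1, 1)
        y = np.arange(N).reshape(1, -1)
        F = np.fft.fft2(f * (-1.0) ** (x + y))   # shifts F(0, 0) to (M/2, N/2)
        return np.log1p(np.abs(F))               # 1 + log|F| compresses the dynamic range

For even M and N, np.fft.fftshift(np.fft.fft2(f)) produces the same centered transform without the (-1)^(x+y) multiplication.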

It follows from Eqs. (4.6-4) and (4.6-5) that the spectrum is insensitive to image translation (the absolute value of the exponential term is 1), but it rotates by the same angle of a rotated image. Figure 4.25 illustrates these properties. The spectrum in Fig. 4.25(b) is identical to the spectrum in Fig. 4.24(d). Clearly, the images in Figs. 4.24(a) and 4.25(a) are different, so if their Fourier spectra are the same then, based on Eq. (4.6-15), their phase angles must be different. Figure 4.26 confirms this. Figures 4.26(a) and (b) are the phase angle arrays (shown as images) of the DFTs of Figs. 4.24(a) and 4.25(a). Note the lack of similarity between the phase images, in spite of the fact that the only difference between their corresponding images is simple translation. In general, visual analysis of phase angle images yields little intuitive information. For example, due to its 45° orientation, one would expect intuitively that the phase angle in

FIGURE 4.25 (a) Translated image and (b) its spectrum; (c) rotated image and (d) its spectrum. The spectrum of the translated image is identical to the spectrum of the original image in Fig. 4.24.

FIGURE 4.26 Phase angle array corresponding (a) to the image of the centered rectangle in Fig. 4.24(a), (b) to the translated image in Fig. 4.25(a), and (c) to the rotated image in Fig. 4.25(c).

Fig. 4.26(a) should correspond to the rotated image in Fig. 4.25(c), rather than to the image in Fig. 4.24(a). In fact, as Fig. 4.26(c) shows, the phase angle of the rotated image has a strong orientation that is much less than 45°.

The components of the spectrum of the DFT determine the amplitudes of the sinusoids that combine to form the resulting image. At any given frequency in the DFT of an image, a large amplitude implies a greater prominence of a sinusoid of that frequency in the image. Conversely, a small amplitude implies that less of that sinusoid is present in the image. Although, as Fig. 4.26 shows, the contribution of the phase components is less intuitive, it is just as important. The phase is a measure of displacement of the various sinusoids with respect to their origin. Thus, while the magnitude of the 2-D DFT is an array whose components determine the intensities in the image, the corresponding phase is an array of angles that carry much of the information about where discernable objects are located in the image. The following example clarifies these concepts further.

EXAMPLE 4.14: Further illustration of the properties of the Fourier spectrum and phase angle.

Figure 4.27(b) is the phase angle of the DFT of Fig. 4.27(a). There is no detail in this array that would lead us by visual analysis to associate it with features in its corresponding image (not even the symmetry of the phase angle is visible). However, the importance of the phase in determining shape characteristics is evident in Fig. 4.27(c), which was obtained by computing the inverse DFT of Eq. (4.6-15) using only phase information (that is, with |F(u, v)| = 1 in the equation). Although the intensity information has been lost (that information is carried by the spectrum), the key shape features in this image are unmistakably from Fig. 4.27(a).

Figure 4.27(d) was obtained using only the spectrum in Eq. (4.6-15) and computing the inverse DFT. This means setting the exponential term to 1, which in turn implies setting the phase to 0. The result contains only intensity information, with the dc term dominating; there is no shape information in the image because the phase was set to zero.


FIGURE 4.27 (a) Woman. (b) Phase angle. (c) Woman reconstructed using only the phase angle. (d) Woman reconstructed using only the spectrum. (e) Reconstruction using the phase angle corresponding to the woman and the spectrum corresponding to the rectangle in Fig. 4.24(a). (f) Reconstruction using the phase of the rectangle and the spectrum of the woman.

Finally, Figs. 4.27(e) and (f) show the dominance of the phase angle in determining the feature content of an image. Figure 4.27(e) was obtained by computing the IDFT of Eq. (4.6-15) using the spectrum of the rectangle in Fig. 4.24(a) and the phase angle corresponding to the woman. The shape of the woman clearly dominates this result. Conversely, the rectangle dominates Fig. 4.27(f), which was computed using the spectrum of the woman and the phase angle of the rectangle.
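The reconstructions in Figs. 4.27(c)-(f) amount to recombining |F(u, v)| and φ(u, v) before an inverse DFT. A minimal NumPy sketch (the function names are illustrative, and the images are whatever same-size 2-D arrays you supply) is:

    import numpy as np

    def phase_only(f):
        # Inverse DFT of Eq. (4.6-15) with |F(u, v)| set to 1 (phase information only).
        F = np.fft.fft2(f)
        return np.real(np.fft.ifft2(np.exp(1j * np.angle(F))))

    def spectrum_only(f):
        # Inverse DFT using only |F(u, v)| (phase angle set to 0).
        F = np.fft.fft2(f)
        return np.real(np.fft.ifft2(np.abs(F)))

    def combine(spectrum_src, phase_src):
        # Spectrum of one image combined with the phase angle of another,
        # as in the reconstructions of Figs. 4.27(e) and (f).
        Fs, Fp = np.fft.fft2(spectrum_src), np.fft.fft2(phase_src)
        return np.real(np.fft.ifft2(np.abs(Fs) * np.exp(1j * np.angle(Fp))))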

4.6.6 The 2-D Convolution Theorem

Extending Eq. (4.4-10) to two variables results in the 2-D circular convolution:

f(x, y) * h(x, y) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m, n) h(x - m, y - n)    (4.6-23)

for x = 0, 1, 2, ..., M - 1 and y = 0, 1, 2, ..., N - 1. As before, Eq. (4.6-23) gives one period of the periodic result. The 2-D convolution theorem is given by the expressions

f(x, y) * h(x, y) <=> F(u, v)H(u, v)    (4.6-24)

and, conversely,

f(x, y)h(x, y) <=> F(u, v) * H(u, v)    (4.6-25)

(We discuss efficient ways to compute the DFT in Section 4.11.)

where F and H are obtained using Eq. (4.5-15) and, as before, the double arrow is used to indicate that the left and right sides of the expressions constitute a Fourier transform pair. Our interest in the remainder of this chapter is in Eq. (4.6-24), which states that the inverse DFT of the product F(u, v)H(u, v) yields f(x, y) * h(x, y), the 2-D spatial convolution of f and h. Similarly, the DFT of the spatial convolution yields the product of the transforms in the frequency domain. Equation (4.6-24) is the foundation of linear filtering and, as explained in Section 4.7, is the basis for all the filtering techniques discussed in this chapter.

Because we are dealing here with discrete quantities, computation of the Fourier transforms is carried out with a DFT algorithm. If we elect to compute the spatial convolution using the IDFT of the product of the two transforms, then the periodicity issues discussed in Section 4.6.3 must be taken into account. We give a 1-D example of this and then extend the conclusions to two variables. The left column of Fig. 4.28 implements convolution of two functions, f and h, using the 1-D equivalent of Eq. (3.4-2) which, because the two functions are of the same size, is written as

f(x) * h(x) = Σ_{m=0}^{399} f(m) h(x - m)

This equation is identical to Eq. (4.4-10), but the requirement on the displacement x is that it be sufficiently large to cause the flipped (rotated) version of h to slide completely past f. In other words, the procedure consists of (1) mirroring h about the origin (i.e., rotating it by 180°) [Fig. 4.28(c)], (2) translating the mirrored function by an amount x [Fig. 4.28(d)], and (3) for each value x of translation, computing the entire sum of products in the right side of the preceding equation. In terms of Fig. 4.28 this means multiplying the function in Fig. 4.28(a) by the function in Fig. 4.28(d) for each value of x. The displacement x ranges over all values required to completely slide h across f. Figure 4.28(e) shows the convolution of these two functions. Note that convolution is a function of the displacement variable, x, and that the range of x required in this example to completely slide h past f is from 0 to 799.
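The "mirror, translate, and sum products" procedure translates directly into code. A minimal sketch, assuming NumPy and equal-length inputs (the function name conv1d_direct is illustrative), is:

    import numpy as np

    def conv1d_direct(f, h):
        # Direct 'flip and slide' convolution of two A-point sequences.
        A = len(f)
        g = np.zeros(2 * A - 1)
        for x in range(2 * A - 1):          # displacements needed to slide h completely past f
            for m in range(A):
                if 0 <= x - m < A:
                    g[x] += f[m] * h[x - m] # sum of products f(m) h(x - m)
        return g

For two 400-point sequences the result has 799 points, the displacement range shown in Fig. 4.28(e).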

If we use the DFT and the convolution theorem to obtain the same result as in the left column of Fig. 4.28, we must take into account the periodicity inherent in the expression for the DFT. This is equivalent to convolving the two periodic functions in Figs. 4.28(f) and (g). The convolution procedure is the same as we just discussed, but the two functions now are periodic. Proceeding with these two functions as in the previous paragraph would yield the result in Fig. 4.28(j), which obviously is incorrect. Because we are convolving two periodic functions, the convolution itself is periodic. The closeness of the periods in Fig. 4.28 is such that they interfere with each other to cause what is commonly referred to as wraparound error. According to the convolution theorem, if we had computed the DFT of the two 400-point functions, f and h, multiplied the

[Figure 4.28 appears here: plots of f(m), h(m), h(-m), h(x - m), and the resulting convolution for the ordinary (left column) and periodic (right column) cases; see the caption below.]

two transforms, and then computed the inverse DFT, we would have obtained the erroneous 400-point segment of the convolution shown in Fig. 4.28(j).

Fortunately, the solution to the wraparound error problem is simple. Consider two functions, f(x) and h(x), composed of A and B samples, respectively. It can be shown (Brigham [1988]) that if we append zeros to both functions so that they have the same length, denoted by P, then wraparound is avoided by choosing

P ≥ A + B - 1    (4.6-26)

In our example, each function has 400 points, so the minimum value we could use is P = 799, which implies that we would append 399 zeros to the trailing edge of each function. This process is called zero padding. As an exercise, you

FIGURE 4.28 Left column: convolution of two discrete functions obtained using the approach discussed in Section 3.4.2. The result in (e) is correct. Right column: Convolution of the same functions, but taking into account the periodicity implied by the DFT. Note in (j) how data from adjacent periods produce wraparound error, yielding an incorrect convolution result. To obtain the correct result, function padding must be used.

The zeros could be appended also to the beginning of the functions, or they could be divided between the beginning and end of the functions. It is simpler to append them at the end.


should convince yourself that if the periods of the functions in Figs. 4.28(f) and (g) were lengthened by appending to each period at least 399 zeros, the result would be a periodic convolution in which each period is identical to the correct result in Fig. 4.28(e). Using the DFT via the convolution theorem would result in a 799-point spatial function identical to Fig. 4.28(e). The conclusion, then, is that to obtain the same convolution result between the "straight" representation of the convolution equation approach in Chapter 3, and the DFT approach, functions in the latter must be padded prior to computing their transforms.
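The same conclusion can be checked numerically. In the sketch below (assuming NumPy; the two 400-point test functions are hypothetical stand-ins, not the data of Fig. 4.28), the unpadded product of transforms exhibits wraparound error, while padding each function to P = 799 points reproduces the correct linear convolution:

    import numpy as np

    f = np.ones(400); f[200:] = 0.0      # hypothetical 400-point functions
    h = np.ones(400)

    circular = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))   # wraparound error
    P = 799                              # P >= A + B - 1, Eq. (4.6-26)
    fp = np.pad(f, (0, P - f.size))      # append 399 trailing zeros
    hp = np.pad(h, (0, P - h.size))
    linear = np.real(np.fft.ifft(np.fft.fft(fp) * np.fft.fft(hp)))   # correct result

    print(np.allclose(linear, np.convolve(f, h)))   # True: matches direct convolution
    print(np.allclose(circular, linear[:400]))      # False: adjacent periods interfere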

Visualizing a similar example in 2-D would be more difficult, but we would arrive at the same conclusion regarding wraparound error and the need for appending zeros to the functions. Let f(x, y) and h(x, y) be two image arrays of sizes A x B and C x D pixels, respectively. Wraparound error in their circular convolution can be avoided by padding these functions with zeros, as follows:

f_p(x, y) = { f(x, y)    0 ≤ x ≤ A - 1 and 0 ≤ y ≤ B - 1
            { 0          A ≤ x ≤ P or B ≤ y ≤ Q                    (4.6-27)

and

h_p(x, y) = { h(x, y)    0 ≤ x ≤ C - 1 and 0 ≤ y ≤ D - 1
            { 0          C ≤ x ≤ P or D ≤ y ≤ Q                    (4.6-28)

with

P ≥ A + C - 1    (4.6-29)

and

Q ≥ B + D - 1    (4.6-30)

The resulting padded images are of size P x Q. If both arrays are of the same size, M x N, then we require that

P ≥ 2M - 1    (4.6-31)

and

Q ≥ 2N - 1    (4.6-32)

We give an example in Section 4.7.2 showing the effects of wraparound error on images. As a rule, DFT algorithms tend to execute faster with arrays of even size, so it is good practice to select P and Q as the smallest even integers that satisfy the preceding equations. If the two arrays are of the same size, this means that P and Q are selected as twice the array size.
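A minimal 2-D sketch along these lines, assuming NumPy and two arrays of the same size (the function name filter_dft is illustrative), is:

    import numpy as np

    def filter_dft(f, h):
        # Linear (not circular) convolution of two M x N arrays via the DFT,
        # with P = 2M and Q = 2N, the smallest even sizes satisfying Eqs. (4.6-31)-(4.6-32).
        M, N = f.shape
        P, Q = 2 * M, 2 * N
        fp = np.pad(f, ((0, P - M), (0, Q - N)))   # zero padding, as in Eq. (4.6-27)
        hp = np.pad(h, ((0, P - M), (0, Q - N)))   # zero padding, as in Eq. (4.6-28)
        return np.real(np.fft.ifft2(np.fft.fft2(fp) * np.fft.fft2(hp)))  # Eq. (4.6-24)

The result is a P x Q array; the region of interest can then be cropped from it as needed.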

The two functions in Figs. 4.28(a) and (b) conveniently become zero before the end of the sampling interval. If one or both of the functions were not zero at


the end of the interval, then a discontinuity would be created when zeros were appended to the function to eliminate wraparound error. This is analogous to multiplying a function by a box, which in the frequency domain would imply convolution of the original transform with a sinc function (see Example 4.1). This, in turn, would create so-called frequency leakage, caused by the high-frequency components of the sinc function. Leakage produces a blocky effect on images. Although leakage never can be totally eliminated, it can be reduced significantly by multiplying the sampled function by another function that tapers smoothly to near zero at both ends of the sampled record to dampen the sharp transitions (and thus the high-frequency components) of the box. This approach, called windowing or apodizing, is an important consideration when fidelity in image reconstruction (as in high-definition graphics) is desired. If you are faced with the need for windowing, a good approach is to use a 2-D Gaussian function (see Section 4.8.3). One advantage of this function is that its Fourier transform is Gaussian also, thus producing low leakage. (A simple apodizing function is a triangle, centered on the data record, which tapers to 0 at both ends of the record. This is called the Bartlett window. Other common windows are the Hamming and the Hann windows. We can even use a Gaussian function. We return to the issue of windowing in Section 5.11.5.)
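A minimal sketch of apodizing with a 2-D Gaussian window, assuming NumPy (the function name gaussian_window and the width parameter sig are illustrative choices, not values from the text), is:

    import numpy as np

    def gaussian_window(M, N, sig=0.4):
        # Separable 2-D Gaussian that tapers smoothly toward the borders of an M x N record.
        # sig is expressed as a fraction of the half-size (a tunable, assumed choice).
        x = np.arange(M) - (M - 1) / 2.0
        y = np.arange(N) - (N - 1) / 2.0
        wx = np.exp(-0.5 * (x / (sig * M / 2.0)) ** 2)
        wy = np.exp(-0.5 * (y / (sig * N / 2.0)) ** 2)
        return np.outer(wx, wy)

    # Multiply the image by the window before zero padding to dampen the sharp
    # transitions at the borders and thereby reduce leakage:
    # f_windowed = f * gaussian_window(*f.shape)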

4.6.7 Summary of 2-D Discrete Fourier Transform Properties

Table 4.2 summarizes the principal DFT definitions introduced in this chapter. Separability is discussed in Section 4.11.1 and obtaining the inverse using a forward transform algorithm is discussed in Section 4.11.2. Correlation is discussed in Chapter 12.

TABLE 4.2 Summary of DFT definitions and corresponding expressions.

1) Discrete Fourier transform (DFT) of f(x, y):
   F(u, v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) e^{-j2π(ux/M + vy/N)}

2) Inverse discrete Fourier transform (IDFT) of F(u, v):
   f(x, y) = (1/MN) Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u, v) e^{j2π(ux/M + vy/N)}

3) Polar representation:
   F(u, v) = |F(u, v)| e^{jφ(u, v)}

4) Spectrum:
   |F(u, v)| = [R^2(u, v) + I^2(u, v)]^{1/2};  R = Real(F), I = Imag(F)

5) Phase angle:
   φ(u, v) = tan^{-1}[ I(u, v) / R(u, v) ]

6) Power spectrum:
   P(u, v) = |F(u, v)|^2

7) Average value:
   f̄(x, y) = (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) = (1/MN) F(0, 0)

8) Periodicity (k1 and k2 are integers):
   F(u, v) = F(u + k1 M, v) = F(u, v + k2 N) = F(u + k1 M, v + k2 N)
   f(x, y) = f(x + k1 M, y) = f(x, y + k2 N) = f(x + k1 M, y + k2 N)

9) Convolution:
   f(x, y) * h(x, y) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m, n) h(x - m, y - n)

10) Correlation:
    f(x, y) ☆ h(x, y) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m, n) h(x + m, y + n)

11) Separability:
    The 2-D DFT can be computed by computing 1-D DFT transforms along the rows (columns) of the image, followed by 1-D transforms along the columns (rows) of the result. See Section 4.11.1.

12) Obtaining the inverse Fourier transform using a forward transform algorithm:
    MN f*(x, y) = Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F*(u, v) e^{-j2π(ux/M + vy/N)}
    This equation indicates that inputting F*(u, v) into an algorithm that computes the forward transform (right side of the above equation) yields MN f*(x, y). Taking the complex conjugate and dividing by MN gives the desired inverse. See Section 4.11.2.

Table 4.3 summarizes some important DFT pairs. Although our focus is on discrete functions, the last two entries in the table are Fourier transform pairs that can be derived only for continuous variables (note the use of continuous variable notation). We include them here because, with proper interpretation, they are quite useful in digital image processing. The differentiation pair can

TABLE 4.3 Summary of DFT pairs. The closed-form expressions in 12 and 13 are valid only for continuous variables. They can be used with discrete variables by sampling the closed-form, continuous expressions.

1) Symmetry properties:
   See Table 4.1

2) Linearity:
   a f1(x, y) + b f2(x, y) <=> a F1(u, v) + b F2(u, v)

3) Translation (general):
   f(x, y) e^{j2π(u0 x/M + v0 y/N)} <=> F(u - u0, v - v0)
   f(x - x0, y - y0) <=> F(u, v) e^{-j2π(u x0/M + v y0/N)}

4) Translation to center of the frequency rectangle, (M/2, N/2):
   f(x, y)(-1)^{x+y} <=> F(u - M/2, v - N/2)
   f(x - M/2, y - N/2) <=> F(u, v)(-1)^{u+v}

5) Rotation:
   f(r, θ + θ0) <=> F(ω, φ + θ0)
   x = r cos θ,  y = r sin θ,  u = ω cos φ,  v = ω sin φ

6) Convolution theorem:
   f(x, y) * h(x, y) <=> F(u, v)H(u, v)
   f(x, y)h(x, y) <=> F(u, v) * H(u, v)

(Continued)
